I have a grunt task like so:
grunt.registerTask 'test', ['clean:test',
                            'coffee:test',
                            'mochaTest',
                            'clean:test']
If mochaTest returns with a fail code, the last clean won't run and will leave unwanted transpiled files.
It doesn't throw an error, so I can't use try/catch/finally, and neither Google nor reading the source code has given me a solution other than manually running grunt clean:test after each failure.
Am I going about this the wrong way, or is there a nifty way to do something similar to a finally-block?
(I know I can run Mocha against CoffeeScript files directly; this is a simplified example problem.)
The reason the final 'clean:test' doesn't run is that Grunt is designed to stop on failures. Using the --force option (grunt test --force) would let it continue after mochaTest fails, but that is not a good habit to get into, and it does not scale to chains of tasks that are longer, more interdependent, or more complex.
What I did to solve this was to add a target to my Makefile:
test:
    grunt test; grunt clean:test
(note the ; rather than &&, so the cleanup runs even when the tests fail). Then
$ make test
runs it.
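If you also want make test itself to fail when the tests fail, a small variation of the same recipe (a sketch, assuming a POSIX shell runs it) captures the exit status before cleaning up:
test:
    grunt test; status=$$?; grunt clean:test; exit $$status
The cleanup always runs, and make still exits non-zero when grunt test failed.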
Related
I am looking to make my automation tests a bit more flexible. I have a QA team that does not know much JavaScript, and I may have to design the tests for users with little or no programming experience.
I have a few scripts created using the Mocha test framework and spectron.js (for an app built with electron.js) that test a few features of the product. I don't want to run every single test every time I run the script. My temporary solution is to bundle the tests into a function as a "suite", like this -
function DiagnosticSuite(location, workstation, workflowName) {
  CreateWorkflow(location, workflowName);
  SetWorkFlowToStation(location, workstation, workflowName);
  DiagnosticTestFlow();
  return;
}

function PowerflowSuite(imei, location, workstation, workflowName) {
  SetWorkFlowToStation(location, workstation, workflowName);
  powerOffFlow(imei);
  return;
}
I was thinking of using Inquirer and using a conditional based on the input to run one of the suites above, like this -
inquirer.prompt([
  {
    type: 'list',
    name: 'workflow',
    message: 'Which workflow do you want to run?',
    choices: ['Power Off', 'Diagnostic']
  }
]).then((answers) => {
  if (answers.workflow === 'Power Off') {
    PowerflowSuite(imei, location, workstation, workflowName);
  }
});
However, when I test this, Mocha does not seem to wait for the user input from Inquirer before running the tests, and I get output like this -
$ npm test
> metistests@1.0.0 test C:\Users\DPerez1\Desktop\metis-automation
> mocha
? Which workflow do you want to run?: (Use arrow keys)
> Power Off
Diagnostic
0 passing (0ms)
It seems like Mocha runs, doesn't see any tests, and finishes; then, when I select an answer, the program just closes.
I am wondering why Mocha does this, and whether it's possible to run my existing Mocha scripts with a library like Inquirer.
So I found a solution to my problem, in case anyone stumbles here and is wondering.
I separated the CLI portion and the Mocha portion into different scripts in the package.json file. That way I can use Node's child_process library to run the Mocha part and pass the information from the CLI part via the process.argv object.
The CLI part asks me which test to run, which environment, and which user, and builds a command to hand to child_process's exec function (I think there are other functions that would work, but that's not important). When the Mocha tests run, I parse the argv values and pass them into the functions so that the tests run based on them.
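A rough sketch of that setup (the file names, script names, and the WORKFLOW variable are illustrative only; this version also passes the answer through an environment variable rather than process.argv, to stay clear of Mocha's own flag parsing):
// cli.js -- the Inquirer part
const { execSync } = require('child_process');
const inquirer = require('inquirer');

inquirer.prompt([
  {
    type: 'list',
    name: 'workflow',
    message: 'Which workflow do you want to run?',
    choices: ['Power Off', 'Diagnostic']
  }
]).then((answers) => {
  // hand the selection to the Mocha process
  execSync('npm run test:mocha', {
    stdio: 'inherit',
    env: Object.assign({}, process.env, { WORKFLOW: answers.workflow })
  });
});

// package.json scripts (sketch): "test": "node cli.js", "test:mocha": "mocha"

// inside a test file -- pick the suite based on what the CLI passed along
const workflow = process.env.WORKFLOW || 'Diagnostic';
if (workflow === 'Power Off') {
  PowerflowSuite(imei, location, workstation, workflowName);
} else {
  DiagnosticSuite(location, workstation, workflowName);
}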
I have implemented a module that should look exactly like a regular ES6 Promise, and I want to test it as such.
The files that test promises in the node tests are here:
https://github.com/nodejs/node/blob/master/deps/v8/test/mjsunit/es6/promises.js
However, I cannot work out how to run these files. After putting in the requisite require for my own module, the files fail with errors about missing functions. I seem to be missing some kind of testing harness but can't work out which it is.
The errors are for missing functions, e.g. describe.
I have inserted a line at the top of the file like so:
var Promise = require('my-promise-module');
this module should appear no different to:
module.exports = Promise;
I am attempting to run the file like node ./promise.js (where promise.js is the linked test file).
This is obviously not the right way to run it as many of the functions called in the test file are not available (e.g. %abortJS), though I cannot work out what the test file needs to run.
I'm completely new to sails, node and js in general so I might be missing something obvious.
I'm using sails 0.10.5 and node 0.10.33.
In the sails.js documentation there's a page about tests http://sailsjs.org/#/documentation/concepts/Testing, but it doesn't tell me how to actually run them.
I've set up the directories according to that documentation, added a test called test/unit/controllers/RoomController.test.js and now I'd like it to run.
There's no 'sails test' command or anything similar. I also couldn't find any indication of how to add a task so that tests are always run before a 'sails lift'.
UPDATE-2: After struggling a bit with how long it takes to run unit tests this way, I decided to create a module that loads the models and turns them into globals just as Sails does, but without taking so long. Even if you strip out every hook except the ORM loader, it can easily take a couple of seconds WITHOUT ANY TESTS (depending on the machine), and it gets slower as you add models. So I created a module called waterline-loader that lets you load just the basics (it's about 10x faster). The module is not stable and needs tests, but you are welcome to use it, modify it to suit your needs, or help me improve it here -> https://github.com/Zaggen/waterline-loader
UPDATE-1:
I've added the info related to running your tests with mocha to the docs, under the Running tests section.
Just to expand on what others have said (especially what Alberto Souza said):
You need two steps to make mocha work with Sails as you want. First, as stated in the sails.js docs, you need to lift the server before running your tests. To do that, create a file called bootstrap.test.js (it can be named anything you like) in the root path of your tests (test/bootstrap.test.js); mocha will call it first, and then it'll call your test files.
var Sails = require('sails'),
    sails;

before(function(done) {
  Sails.lift({
    // configuration for testing purposes
  }, function(err, server) {
    sails = server;
    if (err) return done(err);
    // here you can load fixtures, etc.
    done(err, sails);
  });
});

after(function(done) {
  // here you can clear fixtures, etc.
  sails.lower(done);
});
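For context, a hypothetical spec that relies on that bootstrap might look like the following (supertest and the /room route are assumptions, not part of the original answer; Sails globals are assumed to be on, which is the default):
// test/unit/controllers/RoomController.test.js
var request = require('supertest');

describe('RoomController', function() {
  it('should respond to GET /room', function(done) {
    request(sails.hooks.http.app)
      .get('/room')
      .expect(200, done);
  });
});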
Now in your package.json, under the scripts key, add this line (ignore the comments):
// package.json ....
"scripts": {
  // Some config
  "test": "mocha test/bootstrap.test.js test/**/*.test.js"
},
// More config
This will load the bootstrap.test.js file, lift your Sails server, and then run all your tests that follow the 'testname.test.js' format; you can change it to '.spec.js' if you prefer.
Now you can use npm test to run your test.
Note that you could do the same thing without modifying your package.json by typing mocha test/bootstrap.test.js test/**/*.test.js on your command line.
PS: For a more detailed configuration of bootstrap.test.js, check Alberto Souza's answer or look at this file directly in his GitHub repo.
See my test structure in we.js: https://github.com/wejs/we-example/tree/master/test
You can copy and paste it into your sails.js app and remove the we.js plugin features in bootstrap.js.
Then change your package.json to set the correct mocha command for npm test: https://github.com/wejs/we-example/blob/master/package.json#L10
-- edit --
I created a simple sails.js 0.10.x test example; see it at: https://github.com/albertosouza/sails-test-example
Given that they don't give special instructions and that they use Mocha, I'd expect that running mocha from the command line while you are in the parent directory of test would work.
Sails uses mocha as a default testing framework.
But Sails does not handle test execution by itself, so you have to run your tests manually with the mocha command.
There is, however, an article on how to make all the Sails machinery available inside your tests:
http://sailsjs.org/#/documentation/concepts/Testing
I would like to have access to the many available plugins and tasks in the grunt ecosystem to make my life easier, but I would like control over when and how each task is run. Most importantly, I want a way to run grunt tasks programmatically instead of running grunt from the command line in a folder with a Gruntfile. So I started poking around grunt-cli and grunt for a "way in."
From the source code of GruntJS:
// Expose the task interface. I've never called this manually, and have no idea
// how it will work. But it might.
grunt.tasks = function(tasks, options, done) {
...
As you can see, Mr. Allman cautions us about the interface... my question is, has anyone gotten this to work?
My experiments, so far, have led me to believe the best way to programmatically control grunt is by mimicking the command line call with a child process:
$ npm install grunt-cli   # note: no -g flag

// From runner.js
var exec = require('child_process').exec;
exec('node_modules/.bin/grunt tasks to run', {
  cwd: 'path/to/directory/with/a/gruntfile'
}, function() { /* do stuff here */ });
This seems dirty so I'm thinking about simply writing my own task-runner that exposes an interface for grunt tasks. However, I don't want to dup work if someone has had success with grunt.tasks() despite Mr. Allman's warnings.
The obvious answer seems like it should be: write a grunt task to do whatever you want to do :)
Then you can use grunt.task.run() to control other grunt tasks: http://www.gruntjs.org/article/grunt_task.html
You can also very easily update their configs dynamically before running them by messing with grunt.config.
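For example, a minimal sketch of such a wrapper task (the task and config names are placeholders):
// In your Gruntfile
module.exports = function(grunt) {
  // ... grunt.initConfig and loadNpmTasks as usual ...

  grunt.registerTask('controlled-build', function() {
    // tweak another task's config on the fly
    grunt.config.set('clean.test', ['tmp/**/*.js']);
    // queue tasks programmatically; they run after this task returns
    grunt.task.run(['clean:test', 'coffee:test', 'mochaTest']);
  });
};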
There's also this answer which may answer your question:
How can I run a grunt task from within a grunt task?
Also check out grunt.task.start(), which is not publicly documented, but it seems to kick off all of the tasks: https://github.com/gruntjs/grunt/blob/master/lib/util/task.js#L246 (hat tip: #jibsales)
Maybe this would help you to write a custom handler: https://www.npmjs.com/package/rungrunttask
Usage details
var RunGruntTask = require('rungrunttask').RunGruntTask;
var taskname = 'some grunt task such as backup database every 24hours';
RunGruntTask(taskname);
I have been having great fun getting started with Grunt, but have come across a situation where I don't know what the best course of action is.
tl;dr
Can grunt throw a warning without aborting the task? grunt --force will do that, but that applies the force option to all tasks.
Issue: on running a task that includes jasmine (from grunt-contrib-jasmine), if it cannot find a spec with at least one unit test in it (i.e. it("does stuff", function () {});), then it throws a warning and therefore aborts the whole task.
Code
Here is the task I'm using to build up my site:
grunt.registerTask('build', ['clean', 'sass', 'images', 'favicons', 'lint', 'minify', 'jasmine', 'hashify']);
and here is the jasmine task configuration:
jasmine: {
  testdev: {
    src: folders.js.src + '/**/*.js',
    options: {
      'specs': folders.spec.src + '/*Spec.js',
      'helpers': folders.spec.src + '/*Helper.js'
    }
  },
  // etc. -- more targets for minified code testing and istanbul coverage
}
Are any of these sensible solutions?
Option 1) I can use grunt --force but am reluctant to because it will affect other processes that I might want to genuinely fail the task.
Option 2) Warn but don't fail. Does Grunt have a STDOUT warning that doesn't abort the task?
Option 3) I could fork the plugin and add a 'force' option just for the jasmine task so it continues on, while still logging its warning to the console.
Option 4) Grunt creates an empty dummy spec if one is not found before running jasmine. This seems a bit clunky to me.
There may be an even better solution that I've not yet thought of.
Thanks in advance.
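One variation on options 1 and 3 that doesn't require forking the plugin is to toggle Grunt's force option around just the jasmine step from inside the Gruntfile. This is only a rough sketch (reusing the build task above), and note that it also lets genuine jasmine failures slip through to the rest of the chain:
grunt.registerTask('force-on', 'Keep going past warnings', function () {
  grunt.option('force', true);
});
grunt.registerTask('force-off', 'Restore normal failure behaviour', function () {
  grunt.option('force', false);
});

grunt.registerTask('build', ['clean', 'sass', 'images', 'favicons', 'lint', 'minify',
                             'force-on', 'jasmine', 'force-off', 'hashify']);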
Have a look at Testem. I answered this in another unit-testing question, but the nice thing about it is that it's like a watch task for your tests: you update your code, it runs, and if something goes wrong it logs it to the console (and in the browser you run it through).
Also, because it uses a browser to run the tests, you can have a tab next to your project's code showing a count of the tests that pass/fail, e.g. Testem (60/60).
https://github.com/airportyh/testem
If you want to run this as part of your build process as well, there's grunt-testem. Be careful that you quit the testem runner before running this task or it will fail. Hope this helps. :-)
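If you try it, a minimal testem.json sketch (the paths are assumptions) looks something like:
{
  "framework": "jasmine",
  "src_files": [
    "src/**/*.js",
    "spec/**/*Spec.js"
  ]
}
Running testem in the project root then watches those files and reruns the suite in every connected browser.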