Once a Grunt task completes, I want to print out some information. See the Grunt snippet below.
Is there a way to achieve this? I noticed that grunt.task.run() does not support callbacks, which causes my message to be printed before the coverage report output.
grunt.registerTask('coverage', 'Runs all unit tests available via Mocha and generates code coverage report', function() {
    grunt.task.run('env:unitTest', 'mochaTest');
    grunt.log.writeln('Code coverage report was generated into "build/coverage.html"');
});
I also want to avoid "hacks" such as creating a grunt task only for printing the information out and adding it to the grunt.task.run() chain of tasks.
Create a task that will run when everything else is done, and then add it to your task chain:
grunt.registerTask('alldone', function() {
    grunt.log.writeln('Code coverage report was generated into "build/coverage.html"');
});
grunt.registerTask('default', ['env:unitTest', 'mochaTest', 'alldone']);
There is a much better way to do it, without creating an extra task or modifying anything else.
Grunt is a node process, so you can:
use the process stdout to write what you need
subscribe to the process exit event to do it when the tasks finish their execution
This is a simple example which prints out the time when the tasks have finished their execution:
module.exports = function (grunt) {
    // Creates a write function bound to process.stdout:
    var write = process.stdout.write.bind(process.stdout);

    // Subscribes to the process exit event...
    process.on("exit", function () {
        // ... to write the information to the process stdout
        write('\nFinished at ' + new Date().toLocaleTimeString() + '\n');
    });

    // From here, your usual gruntfile configuration, without changes
    grunt.initConfig({
        // ...
    });
};
When you run any task, you'll see a message at the bottom like:
Finished at 18:26:45
Related
My understanding of jest, from observation, is that it achieves concurrent test execution by spawning helper processes and distributing test files to the workers as they finish their current files.
That suggests to me that jest won't attempt to execute the tests within an individual test file concurrently. So I would expect that the following test would always pass (without needing to pass --runInBand):
describe('counting test', () => {
    let variable = 0;

    it('should start as 1', () => {
        variable += 1;
        expect(variable).toEqual(1);
    });

    it('should change to 2', () => {
        variable += 1;
        expect(variable).toEqual(2);
    });
});
I.e. the second test is always run after the first test has finished. Is that safe, and is there an official document somewhere that specifies this behaviour? I couldn't find one.
Since this didn't have an official answer, I added one to the jest documentation after some further research / experimentation (and it was signed off by one of their moderators).
So, yes, jest runs each test in a file sequentially, waiting for each to finish before moving onto the next. This is now described in Setup and Teardown.
Further note that describe blocks are all executed before any of the test blocks.
For reference, the code that implements this is mostly in jest-circus/src/run.ts and eventHandler.ts.
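For example (an illustrative snippet, not from the docs), logging from describe bodies versus test bodies makes that ordering visible:

describe('ordering demo', () => {
    console.log('1: outer describe body runs during collection');

    it('runs only after all describe bodies', () => {
        console.log('3: test body');
    });

    describe('nested', () => {
        console.log('2: nested describe body, still before any test');
    });
});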
So in Node I can execute a JavaScript file using a command like:
$ node src/someFile.js
But is there a way to execute all of the JavaScript files in a given directory synchronously (one file executes, then after it has finished the next one executes, etc)? Basically, is there a single command that would have the effect of something like
$ node src/firstFile.js
$ node src/secondFile.js
$ node src/thirdFile.js
...
I've tried commands like
$ node src/*.js
but with no success.
If there exists no such command, what's the best way to go about doing something like this?
I am not sure if this is going to work for you, because this is a feature of the shell, not of the node runtime, but:
for f in src/*.js; do node "$f"; done
Or in Powershell:
Get-ChildItem .\*.js | Foreach-Object {
    node $_
}
You could use spawn to run a node process from node, like this:
var cp = require('child_process');
var child = cp.spawn('node', ['src/firstFile.js']);
At this point you have to add some listeners:
// Listen for an exit event:
child.on('exit', function(exitCode) {
    console.log('Child exited with code: ' + exitCode);
});

// Listen for stdout data:
child.stdout.on('data', function(data) {
    console.log(data.toString());
});

// Listen for stderr data; on error, kill this child:
child.stderr.on('data', function(data) {
    console.log('err data: ' + data);
    child.kill();
});
Of course you need to serialize execution here, but that's easy, since the child.on('exit') handler tells you that the process has ended, so you can start the next one.
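A minimal sketch of that serialization, assuming a runScript helper that wraps the spawn-and-listen code above in a promise resolved on exit:

var fs = require('fs');
var path = require('path');
var cp = require('child_process');

// Wrap one child process in a promise that resolves when it exits.
function runScript(file) {
    return new Promise(function(resolve) {
        var child = cp.spawn('node', [file], { stdio: 'inherit' });
        child.on('exit', resolve);
    });
}

// Chain the scripts so each one starts only after the previous one exits.
fs.readdirSync('src')
    .filter(function(f) { return f.endsWith('.js'); })
    .reduce(function(chain, f) {
        return chain.then(function() { return runScript(path.join('src', f)); });
    }, Promise.resolve());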
Look at Controlling Multiple Processes in Node for my working example solution, which runs multiple processes in node and waits for their execution to end / join.
Using a POSIX shell:
$ for js in src/*.js; do node "$js"; done
If calling each one from the shell isn't a hard requirement, I would kick them all off with a single node process from the shell. This node script would:
traverse the directory of modules
require the first one, which executes it, passing a callback that the module calls on completion
when that completion callback is called, require the next script in the directory (see the sketch below)
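A rough sketch of that idea, assuming each module exports a function taking a completion callback (that convention is mine, not something node enforces):

var fs = require('fs');
var path = require('path');

var files = fs.readdirSync('src').filter(function(f) {
    return f.endsWith('.js');
});

(function runNext(i) {
    if (i >= files.length) return;
    // Each module is assumed to export: module.exports = function(done) { ... };
    require(path.resolve('src', files[i]))(function() {
        runNext(i + 1);
    });
})(0);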
gulpfile.js
gulp.task('browser-bundle', ['react'], function() {
    ...
});

gulp.task('react', function() {
    gulp.src(options.JSX_SOURCE)
        .pipe(react())
        .pipe(gulp.dest(options.JSX_DEST))
});
As you can see I have the browser-bundle task depending on the react task. I believe this works as expected because in the output I see this:
[gulp] Running 'react'...
[gulp] Finished 'react' in 3.43 ms
[gulp] Running 'browser-bundle'...
However, although the react task is finished, the files it's supposed to write to the operating system are not quite there yet. I've noticed that if I put a sleep statement in the browser-bundle task then it works as expected, but this seems a little hacky to me.
If I want the react task to not be considered finished until the files (from gulp.dest) have been synchronously written to disk how would I do that?
You need a return statement:
gulp.task('react', function() {
    return gulp.src(options.JSX_SOURCE)
        .pipe(react())
        .pipe(gulp.dest(options.JSX_DEST));
});
With this, all my write operations are done before the next task is processed.
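For reference, returning the stream is one of several completion signals gulp understands; a callback-based variant of the same task might look like this (a sketch, assuming the same options object):

gulp.task('react', function(done) {
    gulp.src(options.JSX_SOURCE)
        .pipe(react())
        .pipe(gulp.dest(options.JSX_DEST))
        .on('end', done);
});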
Back to 2019, in case someone comes here with a similar problem:
In gulp 4.*, at least, gulp waits for a returned promise to resolve but ignores its result.
So if you use the async/await pattern and return the result of gulp.src('...'), you are in for a surprise: the task does not wait for the stream to finish before continuing! This can lead to serious bugs and wasted time. The solution is to "promisify" the stream from gulp.src.
Example:
gulp.task(async function notWaitingTask() {
    // The returned stream is ignored, because an async function
    // returns a promise, not a stream.
    return gulp.src('file.js')
        .pipe(gulp.dest('new-location'));
});

gulp.task(async function waitingTask() {
    // The awaited promise is respected.
    await promisifyStream(
        gulp.src('file.js')
            .pipe(gulp.dest('new-location'))
    );
});
function promisifyStream(stream) {
    return new Promise(res => stream.on('end', res));
}
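One design note: if the stream can fail, you may also want the promise to reject; a hedged variant of the same helper:

// Variant that also surfaces stream errors as promise rejections.
function promisifyStream(stream) {
    return new Promise((res, rej) => {
        stream.on('end', res).on('error', rej);
    });
}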
The accepted answer is spot on, but as per https://github.com/gulpjs/gulp/issues/899, in the 3.x branch of gulp, you cannot do this with dependencies without a bit of extra special sauce:
var run = require('run-sequence');
var nodeunit = require('gulp-nodeunit');
var babel = require('gulp-babel');
var gulp = require('gulp');

// Explicitly run items in order
gulp.task('default', function(callback) {
    run('scripts', 'tests', callback);
});

// Run tests
gulp.task('tests', function() {
    return gulp.src('./build/**/*.tests.js').pipe(nodeunit());
});

// Compile ES6 scripts using babel
gulp.task('scripts', function() {
    return gulp.src('./src/**/*.js')
        .pipe(babel())
        .pipe(gulp.dest('./build'));
});
Notice specifically the use of the 'run-sequence' module to force the tasks to run one after another.
(Without run, rm -rf build && gulp will result in OK: 0 assertions (0ms), because the tests task starts before the scripts task has completely resolved and therefore never finds the files it creates.)
I met the same issue here. Let's say there are 2 tasks, First and Second. Second runs after First.
The First task generates some files, which are to be read by the Second task. Using a dependency doesn't ensure that the Second task can find the files generated.
I had to explicitly use the done callback on the pipeline to make Second start only after First is truly done.
// This approach works!
gulp.task('First', function(done) {
    var subFolders = fs.readdirSync(somefolder)...
    var tasksForFolders = subFolders.map(function(folder) {
        return gulp.src('folder/**/*').sthtogeneratefiles();
    });
    tasksForFolders[tasksForFolders.length - 1].on('end', done);
    return tasksForFolders;
});
gulp.task('Second', ['First'], function() {
    return gulp.src('generatedfolders/**/*').doth();
});
Without the done trick, Second never finds the files generated by First. Below shows what I tried first; with it, the Second task only finds the generated files if I run gulp First by hand and then run gulp Second afterwards.
// This is the WRONG approach, just for demonstration!!
gulp.task('First', function() {
    var subFolders = fs.readdirSync(somefolder)...
    var tasksForFolders = subFolders.map(function(folder) {
        return gulp.src('folder/**/*').sthtogeneratefiles();
    });
    return tasksForFolders;
});

gulp.task('Second', function() {
    return gulp.src('generatedfolders/**/*').doth();
});
I'm automating running the ECMA-402 test suite against the Intl polyfill I wrote, and I've hit some problems. Currently, the tests are run against a fully-built version of the library, which means having to recompile every time a change is made before the tests can run. I'm trying to improve it by splitting the code up into separate modules and using require to run the tests.
The main problem comes into focus when I try and run the tests using the vm module. If I add the polyfill to the test's sandbox, some of the tests fail when checking native behaviour — the polyfill's objects don't inherit from the test context's Object.prototype, for example. Passing require to the tests will not work because the modules are still compiled and executed in the parent's context.
The easiest solution in my head was to spawn a new node process and write the code to the process's stdin, but the spawned node process doesn't execute the code written to it and just waits around forever. This is the code I tried:
function runTest(testPath, cb) {
    var test,
        err = '',
        content = 'var IntlPolyfill = require("' + LIB_PATH + '");\n';

    content += LIBS.fs.readFileSync(LIBS.path.resolve(TEST_DIR, testPath)).toString();
    content += 'runner();';

    test = LIBS.spawn(process.execPath, process.execArgv);
    test.stdin.write(content, 'utf8');

    // cb runs the next test
    test.on('exit', cb);
}
Does anyone have any idea why Node.js doesn't execute the code written to its stdin stream, or if there's another way I can get the module to compile in the same context as the tests?
You must close stdin for the child process to consume the data and exit. Do this when you are done passing code:
test.stdin.end();
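Applied to the runTest function above, that means one extra line right after the write:

test.stdin.write(content, 'utf8');
test.stdin.end(); // signal EOF so node starts executing the piped script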
In the end, I chose to use the -e command line switch to pass the code directly to the new node instance. It only took a slight modification to the code:
function runTest(testPath, cb) {
    var test,
        err = '',
        content = 'var IntlPolyfill = require("' + LIB_PATH + '");\n';

    content += LIBS.fs.readFileSync(LIBS.path.resolve(TEST_DIR, testPath)).toString();
    content += 'runner();';

    test = LIBS.spawn(process.execPath, process.execArgv.concat('-e', content));

    // cb runs the next test
    test.on('exit', cb);
}
I'm using grunt to have some tasks done every time I change my code (jshint, for example), and I want to reload a phantomJs process every time I have changes.
The first way I found is to use grunt.util.spawn to run phantomJs the first time.
// http://gruntjs.com/api/grunt.util#grunt.util.spawn
var phantomJS_child = grunt.util.spawn({
    cmd: './phantomjs-1.9.1-linux-x86_64/bin/phantomjs',
    args: ['./phantomWorker.js']
}, function() {
    console.log('phantomjs done!'); // we never get here...
});
And then, every time watch restarts, another task uses grunt.util.spawn to kill the phantomJs process, which is of course VERY ugly.
Is there any better way to do it?
The thing is that the phantomJs process is not terminating, because I use it as a webserver to serve a REST API with JSON.
Can I have a grunt callback or something whenever watch kicks in so I can close my previous phantomJs process before I re-run the task to create a new one?
I used grunt.event to make a handler, but I cannot see how to access the phantomjs process in order to kill it.
grunt.registerTask('onWatchEvent', function() {
    // whenever watch starts, do this...
    grunt.event.on('watch', function(event, file, task) {
        grunt.log.writeln('\n' + event + ' ' + file + ' | running-> ' + task);
    });
});
This entirely untested code could be a solution for your problem.
Node's native child-spawning function exec immediately returns a reference to the child process, which we can keep around to kill later. To use it, we can create a custom grunt task on the fly, like so:
// THIS DOESN'T WORK. phantomjs is undefined every time the watcher re-executes the task
var exec = require('child_process').exec,
    phantomjs;

grunt.registerTask('spawn-phantomjs', function() {
    // if there's already a phantomjs instance, tell it to quit
    phantomjs && phantomjs.kill();

    // (re-)start phantomjs
    phantomjs = exec('./phantomjs-1.9.1-linux-x86_64/bin/phantomjs ./phantomWorker.js',
        function(err, stdout, stderr) {
            grunt.log.write(stdout);
            grunt.log.error(stderr);
            if (err !== null) {
                grunt.log.error('exec error: ' + err);
            }
        });

    // when grunt exits, make sure phantomjs quits too
    process.on('exit', function() {
        grunt.log.writeln('killing child...');
        phantomjs.kill();
    });
});
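Regarding the "phantomjs is undefined" annotation above: grunt-contrib-watch spawns task runs in child processes by default, so module-level state like the phantomjs variable does not survive between runs. If that is the cause, setting the documented spawn: false option on the watch target should keep the tasks in the grunt process itself; a hedged sketch of that wiring (file patterns and task names are placeholders):

grunt.initConfig({
    watch: {
        scripts: {
            files: ['**/*.js'],
            tasks: ['jshint', 'spawn-phantomjs'],
            options: {
                // run tasks in the grunt process so the `phantomjs`
                // variable persists across watch-triggered runs
                spawn: false
            }
        }
    }
});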