I need to build a sequence of Gulp tasks like this:
Task1 -> Task2A, Task2B, Task2C -> Task3,
where tasks 2A, 2B, and 2C run in parallel (but Task1 should be completed beforehand, and completed just once).
What I tried:
gulp.task('Task1', []);
gulp.task('Task2A', ['Task1']);
gulp.task('Task2B', ['Task1']);
gulp.task('Task2C', ['Task1']);
gulp.task('Task3', ['Task2A', 'Task2B', 'Task2C']);
It looks like it's working, but I'm not sure: does this guarantee that Task1 will be executed only once, or can it be triggered multiple times?
Thank you.
Perhaps the simplest way to do this (without using gulp 4.0) is the run-sequence plugin.
For your case:
var runSequence = require('run-sequence');

gulp.task('build', function(callback) {
  runSequence('Task1',
              ['Task2A', 'Task2B', 'Task2C'],
              'Task3',
              callback);
});
Make sure you have return statements in all your tasks.
Please Note
This was intended as a temporary solution until the release of gulp 4.0, which was expected to support defining task dependencies in a similar way.
Given that gulp 4 appears never to have been fully released, take that for what you will. Be aware that this solution is a hack and may stop working with a future update to gulp.
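As for the original worry, the once-per-run guarantee can be illustrated with a toy dependency runner in plain Node. This is a conceptual sketch, not gulp's actual orchestrator; the task names mirror the question and the task bodies are empty stubs:

```javascript
// Toy dependency runner: dependencies run before the task itself,
// and a shared promise cache ensures each task executes exactly once.
const runs = [];
const done = new Map();

function runTask(name, tasks) {
  if (!done.has(name)) {
    const { deps = [], fn } = tasks[name];
    done.set(name, Promise.all(deps.map((d) => runTask(d, tasks)))
      .then(() => { runs.push(name); return fn && fn(); }));
  }
  return done.get(name); // later callers share the same promise
}

const tasks = {
  Task1:  { fn: function () { /* the shared build step */ } },
  Task2A: { deps: ['Task1'] },
  Task2B: { deps: ['Task1'] },
  Task2C: { deps: ['Task1'] },
  Task3:  { deps: ['Task2A', 'Task2B', 'Task2C'] },
};

runTask('Task3', tasks).then(() => {
  // Task1 appears exactly once, before 2A/2B/2C; Task3 comes last.
  console.log(runs.join(','));
});
```

Even though three tasks depend on Task1, only the first lookup creates its promise; the other two reuse it, so the task body runs once.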
Try this structure
gulp.task('Task1', []);
gulp.task('Task2A', ['Task1']);
gulp.task('Task2B', ['Task2A']);
gulp.task('Task2C', ['Task2B']);
gulp.task('Task3', ['Task2C']);
Also, you can run a task inside another task, like this:
gulp.task('task1', function() {
  gulp.start('task2');
});
Related
I'd like to watch multiple files; when they change I'd like to run MsBuild, and fire a reload with BrowserSync when the build is finished. So far I've got this watcher:
gulp.watch([config.templatePath + '/**/*', '!' + config.templatePath + '/assets/stylesheets/**/*'], ['build']).on('change', function(file) {
  browsersync.reload(file);
});
And this build task:
gulp.task('build', function() {
  return gulp
    .src(config.projectFile)
    .pipe(msbuild({
      toolsVersion: 12.0
    }));
});
This is working fine, but the browser reloads before the build is finished. At first I thought it was a problem with gulp-msbuild, but I had just forgotten the return; see: https://github.com/hoffi/gulp-msbuild/issues/8.
The build task fires before the reload, but the reload doesn't wait until the build has completed. Any ideas how to fix this?
Thanks in advance!
You'll need a separate task for that, like build_reload:
gulp.task('build_reload', function() {
  return gulp
    .src(config.projectFile)
    .pipe(msbuild({
      toolsVersion: 12.0
    }))
    .on('end', function() {
      browsersync.reload();
    });
});
then change the watch task to:
gulp.watch([config.templatePath + '/**/*', '!' + config.templatePath + '/assets/stylesheets/**/*'], ['build_reload']);
This way it will only reload after everything has finished.
Consider this example given on the BrowserSync + Gulp page regarding Browser Reloading, especially this part:
// use default task to launch BrowserSync and watch JS files
gulp.task('default', ['browser-sync'], function () {
  // add browserSync.reload to the tasks array to make
  // all browsers reload after tasks are complete.
  gulp.watch("js/*.js", ['js', browserSync.reload]);
});
As task dependencies are run asynchronously (here: js and browserSync.reload), couldn't it happen that the reload finishes before the js task?
Yes, according to the documentation, that's a possibility.
Off that same page...
(make sure you return the stream from your tasks to ensure the browser is reloaded at the correct time)
If it's an async task, it will just fire and not return anything, so the watcher will not know to refresh, or it may reload before the process is done.
To get around this, you should be adding callbacks to your tasks.
gulp.task('somename', function() {
  var stream = gulp.src('client/**/*.js')
    .pipe(minify())
    .pipe(gulp.dest('build'));
  return stream;
});
Just return the stream so Gulp knows what is up. Then set the watch for the task you want:
gulp.task('default', ['browser-sync'], function () {
  // Watched tasks are run in parallel, not in series.
  gulp.watch(['*.js'], ['somename', browserSync.reload]);
});
This is all included in the documentation:
https://github.com/gulpjs/gulp/blob/master/docs/API.md#async-task-support
gulpfile.js
gulp.task('browser-bundle', ['react'], function() {
  ...
});

gulp.task('react', function() {
  gulp.src(options.JSX_SOURCE)
    .pipe(react())
    .pipe(gulp.dest(options.JSX_DEST));
});
As you can see, I have the browser-bundle task depending on the react task. I believe this works as expected, because in the output I see this:
[gulp] Running 'react'...
[gulp] Finished 'react' in 3.43 ms
[gulp] Running 'browser-bundle'...
However, although the react task is finished, the files it's supposed to write to disk are not quite there yet. I've noticed that if I put a sleep statement in the browser-bundle command then it works as expected, but this seems a little hacky to me.
If I want the react task to not be considered finished until the files (from gulp.dest) have been fully written to disk, how would I do that?
You need a return statement:
gulp.task('react', function() {
  return gulp.src(options.JSX_SOURCE)
    .pipe(react())
    .pipe(gulp.dest(options.JSX_DEST));
});
With this, all my write operations are done before the next task is processed.
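The mechanism behind that fix can be sketched without gulp: a runner that awaits whatever each task returns sequences correctly, while a task returning nothing signals completion immediately. The task names here mirror the question; the runner is a stand-in, not gulp's orchestrator (which additionally understands streams):

```javascript
// Stand-in for gulp's sequencing: the runner awaits whatever each
// task returns (a promise here; gulp also understands streams).
const order = [];

function react() {
  // Simulates an async build step that takes a moment to finish.
  return new Promise((resolve) => setTimeout(() => { order.push('react'); resolve(); }, 10));
}

function browserBundle() {
  order.push('browser-bundle');
}

async function runSeries(...tasks) {
  for (const task of tasks) await task(); // awaiting a non-promise resolves immediately
  return order;
}

runSeries(react, browserBundle).then((o) => console.log(o.join(' -> ')));
// prints "react -> browser-bundle"
```

If `react` did not return its promise, the `await` would resolve immediately and `browser-bundle` would run first, which is exactly the missing-return bug above.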
Back to 2019, in case someone comes here with a similar problem:
In gulp 4.x, at least, gulp waits for a returned promise to resolve but ignores its result.
So if you use the async/await pattern and return the result of gulp.src('...'), you're in for a surprise: the task does not wait for the stream to finish before continuing! This can lead to serious bugs and wasted time. The solution is to "promisify" gulp.src.
Example:
gulp.task(async function notWaitingTask() {
  // The returned stream is ignored, because the function returns a promise, not a stream
  return gulp.src('file.js')
    .pipe(gulp.dest('new-location'));
});

gulp.task(async function waitingTask() {
  // The stream is now respected, because we await it
  await promisifyStream(
    gulp.src('file.js')
      .pipe(gulp.dest('new-location'))
  );
});

function promisifyStream(stream) {
  return new Promise(res => stream.on('end', res));
}
The accepted answer is spot on, but as per https://github.com/gulpjs/gulp/issues/899, in the 3.x branch of gulp you cannot do this with dependencies without a bit of extra special sauce:
var run = require('run-sequence');
var nodeunit = require('gulp-nodeunit');
var babel = require('gulp-babel');
var gulp = require('gulp');

// Explicitly run items in order
gulp.task('default', function(callback) {
  run('scripts', 'tests', callback);
});

// Run tests
gulp.task('tests', function() {
  return gulp.src('./build/**/*.tests.js').pipe(nodeunit());
});

// Compile ES6 scripts using Babel
gulp.task('scripts', function() {
  return gulp.src('./src/**/*.js')
    .pipe(babel())
    .pipe(gulp.dest('./build'));
});
Notice specifically the use of the run-sequence module to force the tasks to run one after another.
(Without run, rm -rf build && gulp would result in OK: 0 assertions (0ms), because the tests task starts before the scripts task has completely resolved, and therefore does not find the files that scripts creates.)
I met the same issue. Let's say there are 2 tasks, First and Second, and Second runs after First.
The First task generates some files which are to be read by the Second task. Using a dependency doesn't ensure that the Second task can find the generated files.
I had to explicitly use the done callback on the pipeline so that Second only starts after First is truly done.
// This approach works!
gulp.task('First', function(done) {
  var subFolders = fs.readdirSync(somefolder)...
  var tasksForFolders = subFolders.map(function(folder) {
    return gulp.src('folder/**/*').sthtogeneratefiles();
  });
  tasksForFolders[tasksForFolders.length - 1].on('end', done);
  return tasksForFolders;
});

gulp.task('Second', ['First'], function() {
  return gulp.src('generatedfolders/**/*').doth();
});
Without the done trick, Second never finds the files generated by First. Below is what I tried first; with that version, Second can only find the generated files if I run gulp First by hand and then run gulp Second afterwards.
// This is the WRONG approach, just for demonstration!!
gulp.task('First', function() {
  var subFolders = fs.readdirSync(somefolder)...
  var tasksForFolders = subFolders.map(function(folder) {
    return gulp.src('folder/**/*').sthtogeneratefiles();
  });
  return tasksForFolders;
});

gulp.task('Second', function() {
  return gulp.src('generatedfolders/**/*').doth();
});
I have this friend who hates RequireJS. I recently started using it and I like it, but from his arguments I started feeling like I might end up like him and give up on RequireJS.
require(['one'], function() {
  require(['two'], function() {
    require(['three'], function() {
    });
  });
});
The code above is fairly straightforward: three depends on two, and two depends on one. That makes the browser load one first, then two, then three, in sequence; that's how RequireJS handles it, and it makes the site really slow if there are lots of these kinds of dependencies.
The browser's ability to load files in parallel is not used at all.
I want to know if there is any way to make require load all these files asynchronously but still maintain their execution order, so that the browser's parallelism can be used.
RequireJS is a powerful tool that lets us load scripts asynchronously (meaning we start loading each one and don't wait until it is actually loaded) while still managing dependencies (if one file depends on another, we want to make sure the dependency is loaded beforehand). The way you use RequireJS is not what it was made for. The callback function inside require is called as soon as the dependency module ('one', 'two', 'three') is loaded, so you are loading all the modules sequentially, not asynchronously (one is loaded -> callback function is called -> two is loaded -> callback function is called -> three is loaded -> callback function is called). That makes no sense. The way it is supposed to work:
in your HTML file:
<script data-main='main.js' src='require.js'></script>
in your main.js file (some file you wrote inside your script tag as data-main):
require(['one'], function(one) {
  one.function1();
});
in one.js:
define(['two', 'three'], function(two, three) {
  return {
    function1 : function() {
      two.function2();
      three.function3();
      console.log('one');
    }
  };
});
in two.js:
define([], function() {
  return {
    function2 : function() {
      console.log('two');
    }
  };
});
in three.js:
define([], function() {
  return {
    function3 : function() {
      console.log('three');
    }
  };
});
In my example, 'one' depends on 'two' and 'three' (it requires function2() from 'two' and function3() from 'three'); 'two' and 'three' have no dependencies. This example code assumes all the files are in one folder (including require.js). As a result, we see 'two', 'three', 'one' printed (in that order).
Although RequireJS uses the AMD model for loading scripts, we can still manage module evaluation order ourselves. If you don't want to use define(), you can use a special plugin called order!. It works with the RequireJS 1.0 API. It allows files to be fetched asynchronously but evaluated in a specific order: http://requirejs.org/docs/1.0/docs/api.html#order.
The accepted solution will work nicely for most use-cases, especially because it will usually make sense to use r.js to bundle everything, which makes parallel loading a moot point. However, this solution does not actually allow for loading all modules in parallel, instead either loading in a 3-step sequence that looks like: [main.js] -> [one.js] -> [two.js, three.js] (if you don't use r.js to package the files all to one module) or a single load of one packaged file (if you do use r.js to package all the files to one module).
If you do in fact want to make the files load in a single parallel step like: [one.js, two.js, three.js], you have a couple of options:
A. Use RequireJS 1.0 + order plugin
This one is covered in gthacoder's other answer.
B. Wrap the scripts so that Require can load them, then execute them in a separate stage
This introduces some complexity, but is very reliable. The key is that every module you want to load in parallel should contain a named module whose name does not match the name used to load the file. This prevents the module from executing until you explicitly request it:
one.js
define('one-inner', ['two-inner'], function () {
...
});
two.js
define('two-inner', ['three-inner'], function () {
...
});
three.js
define('three-inner', function () {
...
});
Your page or main.js file
// 1. Require loads one.js, two.js, and three.js in parallel,
// but does not execute the modules, because nothing has called
// them by the correct name yet.
require(['one', 'two', 'three'], function () {
  // 2. Kickstart execution of the modules
  require(['one-inner'], function () {
    ....
  });
});
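The "register now, execute on demand" idea can be modeled in plain JavaScript with a toy registry. This is a conceptual sketch, not RequireJS's actual loader; the module names match the example above, but the factories are made up for illustration:

```javascript
// Toy module registry modeling the "named module" trick:
// define() only records a factory; nothing runs until required.
const registry = new Map();
const cache = new Map();
const executed = [];

function define(name, deps, factory) {
  registry.set(name, { deps, factory });
}

function requireMod(name) {
  if (cache.has(name)) return cache.get(name);
  const { deps, factory } = registry.get(name);
  const resolved = deps.map(requireMod); // dependencies execute first
  executed.push(name);
  const mod = factory(...resolved);
  cache.set(name, mod);
  return mod;
}

// "Loading" all three files only registers them; nothing executes yet.
define('three-inner', [], () => ({ three: true }));
define('two-inner', ['three-inner'], (three) => ({ two: true, three }));
define('one-inner', ['two-inner'], (two) => ({ one: true, two }));

// Kickstarting 'one-inner' executes the chain in dependency order.
requireMod('one-inner');
console.log(executed.join(',')); // prints "three-inner,two-inner,one-inner"
```

The registration step corresponds to the parallel file loads; the single `requireMod` call corresponds to the inner `require(['one-inner'], …)` that kickstarts execution.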
Once a Grunt task completes, I want to print out some information. See the Grunt snippet below.
Is there a way to achieve this? I noticed that grunt.task.run() does not support callbacks. This causes my message to be printed out prior to coverage report output.
grunt.registerTask('coverage', 'Runs all unit tests available via Mocha and generates code coverage report', function() {
  grunt.task.run('env:unitTest', 'mochaTest');
  grunt.log.writeln('Code coverage report was generated into "build/coverage.html"');
});
I also want to avoid "hacks" such as creating a grunt task only for printing the information out and adding it to the grunt.task.run() chain of tasks.
Create a task that will run when everything is all done and then add it to your task chain:
grunt.registerTask('alldone', function() {
  grunt.log.writeln('Code coverage report was generated into "build/coverage.html"');
});

grunt.registerTask('default', ['env:unitTest', 'mochaTest', 'alldone']);
There is a much better way to do this, without creating an extra task or modifying anything else.
Grunt is a node process, so you can:
use the process stdout to write what you need
subscribe to the process exit event to run your code when the tasks finish executing
This is a simple example which prints out the time when the tasks have finished executing:
module.exports = function (grunt) {
  // Creates a write function bound to process.stdout:
  var write = process.stdout.write.bind(process.stdout);

  // Subscribes to the process exit event...
  process.on("exit", function () {
    // ... to write the information to the process stdout
    write('\nFinished at ' + new Date().toLocaleTimeString() + '\n');
  });

  // From here, your usual gruntfile configuration, without changes
  grunt.initConfig({
When you run any task, you'll see a message at the bottom like:
Finished at 18:26:45
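The same exit hook works in any Node script, not just a Gruntfile; a minimal stdlib sketch (the event names pushed here are purely illustrative):

```javascript
// Bind write once, as in the Gruntfile above, so output still works
// even if something replaces process.stdout.write later.
const write = process.stdout.write.bind(process.stdout);
const events = [];

process.on('exit', () => {
  // Only synchronous work runs here: the event loop has stopped,
  // so timers and I/O callbacks scheduled now would never fire.
  events.push('exit');
  write('Finished at ' + new Date().toLocaleTimeString() + '\n');
});

events.push('task');
setTimeout(() => events.push('async-work'), 0); // still completes before 'exit'
```

Because the 'exit' handler runs after all pending timers and I/O, it is a reliable place for a final, synchronous status line.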