Anyone know how to run code whenever the rollup watcher fires? I saw some references to trapping watcher events, like 'start', but I'm using a rollup.config.js file and I have no idea where and how I'd check for such events. FYI, I'm learning service workers and I want to modify the service worker file (appending a '\n' would be sufficient) whenever any of my source code changes.
On a separate forum, I received the following answer. It sounds correct to me, but I haven't implemented it yet so take it for what it's worth:
if you have your npm run dev command run node some-script.js, and have some-script.js do something similar to https://rollupjs.org/guide/en/#rollupwatch, then you can register a watcher.on('event', ...) callback and re-run whatever it is you want to run.
you can use your existing config and import it into some-script.js to pass as options to rollup.watch, with the addition of any watch-specific options you'd like to add.
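A minimal sketch of that suggestion, assuming a CommonJS rollup.config.js and a built service worker at ./dist/sw.js (both paths are illustrative; adjust them to your setup). The event codes come from the rollup.watch documentation:

// some-script.js
const fs = require('fs');
const rollup = require('rollup');
const config = require('./rollup.config.js');

const watcher = rollup.watch(config);

watcher.on('event', (event) => {
  // event.code is one of: START, BUNDLE_START, BUNDLE_END, END, ERROR
  if (event.code === 'END') {
    // A rebuild finished: append a newline so the service worker file changes.
    fs.appendFileSync('./dist/sw.js', '\n');
  }
  // On newer rollup versions, bundle results should be closed to free resources.
  if (event.result) {
    event.result.close();
  }
});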
In many cases, the best way to do this is by writing a rollup plugin. Plugins have a watchChange callback that you can use to do something when changes are detected. If you only want to act when certain files change, you can use the minimatch library to check whether the changed file matches a glob you pass in or configure somewhere.
This is a sketch of the solution I came up with:
const path = require('path')
const minimatch = require('minimatch')

function demoWatcherPlugin(globs) {
  let doTheAction = false
  return {
    name: 'demo-watcher',
    watchChange(id) {
      // Does the file that just changed match one of the globs passed into
      // this plugin?
      const relPath = path.relative(__dirname, id)
      if (globs.some((glob) => minimatch(relPath, glob))) {
        doTheAction = true
      }
    },
    async buildEnd() {
      if (doTheAction) {
        doTheAction = false // reset so the action runs once per rebuild
        // Do the action you want to perform when certain files change,
        // e.g. append a '\n' to your service worker file.
      }
    },
  }
}
Usage (in rollup.config.js):
plugins: [
  demoWatcherPlugin(['src/foo/**/*.js']),
],
I'm using Webdriver.io to run tests on a large number of pages. Because all the specs for the pages are in a JSON file, I have a special class that sets up the test. It looks like this:
module.exports = class PageTester {
  suiteName = '';
  browser = {};

  constructor (suiteName, browser) {
    this.suiteName = suiteName;
    this.browser = browser;
  }

  testModel(currentModel) {
    describe(this.suiteName + ' endpoint ' + currentModel.url, () => {
      this.browser.url(currentModel.url);
      /* it() statements for the test */
    });
  }
}
Then in my specs folder I have a file that loads the JSON and plugs it into the PageTester class, like this:
const PageTester = require('../modules/PageTester');
const models = require('/path/to/some/file.json');
const pageTester = new PageTester('Some Name', browser);
for (const modelName in models) {
  pageTester.testModel(models[modelName]);
}
When I run this code, WebdriverIO gives me the following warning:
WARN #wdio/mocha-framework: Unable to load spec files quite likely because they rely on `browser` object that is not fully initialised.
`browser` object has only `capabilities` and some flags like `isMobile`.
Helper files that use other `browser` commands have to be moved to `before` hook.
Spec file(s): /suite/test/specs/test.js
All the tests seem to run fine, so I don't actually understand what this warning is complaining about, or what negative consequences ignoring it may have. So I would like to a) understand why this is happening, and b) find out how to get rid of this warning given the way my code is set up.
In my case, I resolved it by fixing the path to the required files. I noticed that my path was wrong. But the error that wdio throws is not really helpful. :/
You can only interact with the browser object inside it blocks, because it is not fully accessible before the browser session is started.
See https://webdriver.io/blog/2019/11/01/spec-filtering.html for details.
You should simply ensure your spec file and the respective page file are kept in a similar folder structure.
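A minimal sketch of that fix, moving the browser.url() call into a before hook as the warning suggests (reusing the question's PageTester; this keeps browser commands out of spec-load time):

module.exports = class PageTester {
  constructor (suiteName, browser) {
    this.suiteName = suiteName;
    this.browser = browser;
  }

  testModel(currentModel) {
    describe(this.suiteName + ' endpoint ' + currentModel.url, () => {
      // Browser commands are only safe once the session exists,
      // so run them inside a hook rather than while loading the spec.
      before(() => {
        this.browser.url(currentModel.url);
      });
      /* it() statements for the test */
    });
  }
}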
EDIT: As I wrote IN THE TITLE AND FROM THE BEGINNING, this is not about command-line parameters and is thus NOT A DUPLICATE. //EDIT
I have a Sass setup with an indefinite number of uniquely-designed pages (page_1, page_2, etc), each having their own sass/pages/page_1/page_1.scss file.
The pages all belong to the same website, and each page's sass file #imports the same set of files from a sass/includes folder.
With a basic gulp task watching sass/**/*, every page's styles get compiled anytime I make a change to any page's styles. Obviously this doesn't scale well.
I tried using gulp-watch, but it doesn't catch if changes are made to one of the included .scss files. It only catches changes made to the files that actually get compiled into an equivalent .css.
For the purposes of having my gulpfile be as DRY as possible, the best solution I could come up with was to maintain a basic array of folder names in gulpfile.js, and to loop through and watch each of them separately, using the same sass-compiling task for each folder.
var pageFolderNames = [
  'page_1',
  'page_2'
  // etc
];
Then for the gulp task, I have:
gulp.task('watch_pages', function()
{
  // Get array length
  var numPages = pageFolderNames.length;

  // Add a new watch task for each individual page
  for (var i = 0; i < numPages; i++)
  {
    gulp.watch('sass/pages/' + pageFolderNames[i] + '/**/*.scss', ['sass_page']);
  }
});
The (simplified) task that compiles sass:
// Task: Compile page-specific Sass
gulp.task('sass_page', function()
{
  return gulp.src('sass/pages/' + pageFolderNames[i] + '/**/*.scss')
    .pipe(plumber(plumberErrorHandler))
    .pipe(sass(...))
    .pipe(gulp.dest('css/pages/' + pageFolderNames[i]));
});
This approach (I know my JS-fu is weaksauce) results in an error:
'sass_page' errored after 71 μs
ReferenceError: i is not defined
Is there any way to pass parameters, such as i, to gulp tasks to get this working? Alternately, is there a better way to accomplish what I'm trying to do? I have a sneaking suspicion there is. :-/
I found out there is an on("change") event for gulp watch. So this might be what you're looking for:
var pagesDir = 'sass/pages/';
gulp.task('watch_pages', function() {
  gulp.watch(pagesDir + '**/*')
    .on("change", function(file) {
      // absolute path to folder that needs watching
      var changedDest = path.join(__dirname, pagesDir);
      // relative path to changed file
      var changedFile = path.relative(changedDest, file.path);
      // split the relative path on the OS separator,
      // get the specific folder with changes
      var pageFolder = changedFile.split(path.sep)[0];

      gulp.src(path.join(pagesDir, pageFolder) + '/**/*.scss')
        .pipe(plumber(plumberErrorHandler))
        .pipe(sass(...))
        .pipe(gulp.dest('css/pages/' + pageFolder));

      console.log(changedDest);
      console.log(changedFile);
      console.log(pageFolder);
    });
});
Also, this way you don't need to declare the folder variables. If you add directories within the path being watched, it should pick them up and name the destination folders accordingly.
Theoretically, the gulp task to compile sass should work within the watch task. I played around with the paths, and it seems to spit them out correctly. Let me know what happens; I can modify if necessary.
The required packages:
var gulp = require("gulp"),
    path = require("path");
BTW, since you already have access to the file path, you can perhaps target the specific scss file instead of the whole directory.
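A sketch of that idea, reusing the variables from the handler above (note it only helps when the changed file is itself a page file; a change to a shared include still needs the per-folder compile):

// Inside the .on("change") handler: compile just the changed file.
gulp.src(file.path)
  .pipe(plumber(plumberErrorHandler))
  .pipe(sass()) // same sass options as in the snippet above
  .pipe(gulp.dest('css/pages/' + pageFolder));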
As Brian answered, the best approach is to have one watcher, much like the principle of delegation with DOM event listeners. Even if, in our case, the consumed resources and performance may not really matter, one watcher is still the best approach. From there you get the file that was changed, and from that you get your page folder, so I won't add anything on that point. One exception: you don't necessarily need to use .on("change", ...). You can receive the same parameter directly in your watch callback, as this example shows:
// watch() here is the gulp-watch package's watcher
watch('./app/tempGulp/json/**/*.json', function (evt) {
  jsonCommentWatchEvt = evt
  gulp.start('jsonComment')
})
evt here is what Brian set as file.
What I want to add is how to pass a parameter from your watcher to your task, for example the i in your initial code. One way of doing that, which I use myself, is to set a global variable that holds the data to be passed, so that it is accessible in both blocks: you set its value in the watcher just before starting the task, and then you read it in the task. A good practice is to copy it into a task-local variable right at the start of the task; that way you avoid the problem of it being changed by another watch handler firing in the meantime.
For a lively example. check my answer out here : https://stackoverflow.com/a/49733123/7668448
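A minimal sketch of that pattern using gulp 3 APIs, matching the snippets above (the watch_json and jsonComment names are illustrative):

var gulp = require('gulp')        // gulp 3, which provides gulp.start
var watch = require('gulp-watch')

// Global variable shared between the watcher and the task.
var jsonCommentWatchEvt = null

gulp.task('watch_json', function () {
  watch('./app/tempGulp/json/**/*.json', function (evt) {
    // Stash the event, then start the task (gulp 3 API).
    jsonCommentWatchEvt = evt
    gulp.start('jsonComment')
  })
})

gulp.task('jsonComment', function () {
  // Copy into a task-local variable right away, so another watch event
  // firing mid-run can't change the value underneath us.
  var evt = jsonCommentWatchEvt
  console.log('changed file: ' + evt.path)
  // ... do the per-file work here ...
})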
I have a SPA (in Aurelia / TypeScript but that should not matter) which uses SystemJS. Let's say it runs at http://spa:5000/app.
It sometimes loads JavaScript modules like waterservice/external.js on demand from an external URL like http://otherhost:5002/fetchmodule?moduleId=waterservice.external.js. I use SystemJS.import(url) to do this and it works fine.
But when this external module wants to import another module with a simple import { OtherClass } from './other-class';, this (understandably) does not work. When loaded by the SPA, it looks for http://spa:5000/app/other-class.js. In this case I have to intercept the path/location to redirect it to http://otherhost:5002/fetchmodule?moduleId=other-class.js.
Note: The TypeScript compilation of waterservice/external.ts works fine because the TypeScript compiler can find ./other-class.ts easily. Obviously I cannot use an absolute URL for the import.
How can I intercept the module loading inside a module I am importing with SystemJS?
One approach I already tested is to add a mapping in the SystemJS configuration. If I import it like import { OtherClass } from 'other-class'; and add a mapping like "other-class": "http://otherhost:5002/fetchmodule?moduleId=other-class" it works. But if this approach is good, how can I add mapping dynamically at runtime?
Other approaches like a generic load url interception are welcome too.
Update
I tried to intercept SystemJS as suggested by artem, like this:
var systemLoader = SystemJS;
var defaultNormalize = systemLoader.normalize;
systemLoader.normalize = function(name, parentName) {
  console.error("Intercepting", name, parentName);
  return defaultNormalize(name, parentName);
}
This would normally not change anything except produce some console output to see what is going on. Unfortunately it does seem to change something, as I get an error Uncaught (in promise) TypeError: this.has is not a function inside system.js.
Then I tried to add mappings with SystemJS.config({map: ...});. Surprisingly, this function works incrementally: when I call it, it does not lose the already provided mappings. So I can do:
SystemJS.config({map: {
  "other-class": `http://otherhost:5002/fetchModule?moduleId=other-class.js`
}});
This does not work with relative paths (those which start with . or ..) but if I put the shared ones in the root this works out.
I would still prefer to intercept the loading to be able to handle more scenarios but at the moment I have no idea which has function is missing in the above approach.
how can I add mapping dynamically at runtime?
AFAIK SystemJS can be configured at any time just by calling
SystemJS.config({ map: { additional-mappings-here ... }});
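For example, a minimal sketch assuming the SystemJS 0.x API and the URLs from the question:

// Add the mapping at runtime, just before importing the external module;
// as noted in the question, SystemJS.config() merges with earlier mappings.
SystemJS.config({
  map: {
    "other-class": "http://otherhost:5002/fetchmodule?moduleId=other-class.js"
  }
});

SystemJS.import("http://otherhost:5002/fetchmodule?moduleId=waterservice.external.js")
  .then(function (external) {
    // 'other-class' inside the external module now resolves via the mapping
  });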
If it does not work for you, you can override loader.normalize and add your own mapping from module ids to URLs there. Something along these lines:
// assuming you have one global SystemJS instance
var loader = SystemJS;
var defaultNormalize = loader.normalize;
loader.normalize = function(name, parentName) {
  if (parentName == 'your-external-module' && name == 'your-external-submodule') {
    return Promise.resolve('your-submodule-url');
  } else {
    // Note the .call(loader, ...): invoking defaultNormalize without binding
    // `this` is likely what produced the "this.has is not a function" error
    // in the update above.
    return defaultNormalize.call(loader, name, parentName);
  }
}
I have no idea if this will work with typescript or not. Also, you will have to figure out what names exactly are passed to loader.normalize in your case.
Also, if you use systemjs builder to bundle your code, you will need to add that override to the loader used by builder (and that's whole another story).
So I've just updated to webpack 2 and have my first working setup where webpack automatically creates chunks by looking at System.import calls. Pretty sweet!
However, I load the initial chunk with an ajax call so that I can show the progress while loading
So my question is, can I overwrite or change the function of System.import somehow so that it will use an ajax request that I can listen to for events, instead of loading the chunk with a <script> tag?
No, unfortunately not. webpack 2 translates System.import() to ordinary require.ensure() calls, which just use the <script> tag. Even the official WHATWG Loader Spec does not provide an API for this kind of event. I've created an issue for this question.
Regarding webpack: there is a way to implement your own require.ensure(). However, since chunk loading is an integral part of webpack, this requires diving a little deeper. I'm not sure how important this is for you, but you might be interested in how things work inside webpack, so let's take a look:
In webpack, all internal features are implemented as plugins. This way, webpack is able to support a lot of different features and environments. So, if you're interested how things are implemented in webpack, it's always a good idea to a) take a look at WebpackOptionsApply or b) search for a specific string/code snippet.
Chunk loading depends heavily on the given target, because you need different implementations for each environment. Webpack allows you to define custom targets. When you pass in a function instead of a string, webpack invokes the function with a compiler instance. There you can apply all the required plugins. Since our custom target is almost like the web target, we just copy all the stuff from the web target:
// webpack.config.js
const path = require("path");
const JsonpTemplatePlugin = require("webpack/lib/JsonpTemplatePlugin");
const NodeSourcePlugin = require("webpack/lib/node/NodeSourcePlugin");
const FunctionModulePlugin = require("webpack/lib/FunctionModulePlugin");
const LoaderTargetPlugin = require("webpack/lib/LoaderTargetPlugin");

function customTarget(compiler) {
    compiler.apply(
        new JsonpTemplatePlugin(compiler.options.output),
        new FunctionModulePlugin(compiler.options.output),
        new NodeSourcePlugin(compiler.options.node),
        new LoaderTargetPlugin("web")
    );
}

module.exports = {
    entry: require.resolve("./app/main.js"),
    output: {
        path: path.resolve(__dirname, "dist"),
        filename: "bundle.js"
    },
    target: customTarget
};
If you take a look at each plugin, you will recognize that the JsonpTemplatePlugin is responsible for loading chunks. So let's replace that with our own implementation. We call it the XHRTemplatePlugin:
function customTarget(compiler) {
    compiler.apply(
        new XHRTemplatePlugin(compiler.options.output),
        new FunctionModulePlugin(compiler.options.output),
        new NodeSourcePlugin(compiler.options.node),
        new LoaderTargetPlugin("my-custom-target")
    );
}
Our XHRTemplatePlugin is responsible for providing the code in the main chunk, in each child chunk and for hot updates:
function XHRTemplatePlugin() {}

XHRTemplatePlugin.prototype.apply = function (compiler) {
    compiler.plugin("this-compilation", function(compilation) {
        compilation.mainTemplate.apply(new XHRMainTemplatePlugin());
        compilation.chunkTemplate.apply(new XHRChunkTemplatePlugin());
        compilation.hotUpdateChunkTemplate.apply(new XHRHotUpdateChunkTemplatePlugin());
    });
};
Maybe, you can also re-use the JsonpChunkTemplatePlugin and JsonpHotUpdateChunkTemplatePlugin plugin, but this depends on your use-case/implementation.
Your XHRMainTemplatePlugin now may look like this:
function XHRMainTemplatePlugin() {}

XHRMainTemplatePlugin.prototype.apply = function (mainTemplate) {
    mainTemplate.plugin("require-ensure", function(_, chunk, hash) {
        return this.asString([
            // Add your custom implementation here
            "fetch()"
        ]);
    });
};
I won't go any further here because I think this answer is already long enough. But I recommend to create a real small example project and to check the output created by webpack. The internal webpack plugins may look a little bit scary on first sight, but most of them are real short and do just one thing. You can also get some inspiration from them.
I'm running a number of grunt tasks on a project. One of them sets a number of options with grunt.option(key, value), which I need to access in a subsequent task with var option = grunt.option(key). These options return undefined when I try to access them in the latter task.
If I log the variable at the head of the latter task, it is shown before that task is run, and I am unable to access the previously set value in the task's config.
Is there something I need to do between setting the grunt.option and using it in another task to notify grunt of the change? Am I doing something inherently wrong here? Or is there a better way to do this with a global variable of sorts? (My research pointed me to using grunt.option.)
My Gruntfile.js
grunt.log.writeln('loading tasks');
grunt.loadTasks('grunttasks');
grunt.log.writeln('tasks loaded');

grunt.registerTask(
    'jenkins', [
        'clean',                 // clears out any existing build folders
        'curl',                  // gets build config file from remote server
        'set-env',               // sets the grunt.options based on the build config file
        'string-replace:config', // attempts to access the new grunt.options
        ....
        ....
        ....
        ....
    ]
);
In my set-env task, I set some environment variables based on the contents of a text file returned in the curl task. This works fine and I can log all the grunt.options immediately after setting them so I know they are being set correctly.
set-env-task
module.exports = function(grunt) {
    grunt.registerTask('set-env', 'set-env', function() {
        ......
        ......
        for (var i = 0; i < propFile.length; i++) {
            if (propFile[i] !== '') {
                ......
                ......
                keyValues[propName] = propValue;
                grunt.option(propName, propValue);
                console.log("FROM GRUNT.OPTION " + grunt.option(propName));
                ......
                ......
            }
        }
        ......
        ......
    });
};
When I try and access the grunt.options set in the above task from my string-replace (or any other subsequent) task undefined is returned. If I set test values to these grunt.options at the start of my Gruntfile.js I can access them with no issue:
module.exports = function(grunt) {
    grunt.config('string-replace', {
        ..........
        ..........
        config: {
            files: configFiles,
            options: {
                replacements: [
                    ..........
                    ..........
                    {
                        pattern: /var _OPTION_KEY = \"(.*?)\"\;/ig,
                        replacement: 'var _OPTION_KEY = "' + grunt.option('_OPTION_KEY') + '";' // grunt.option('_OPTION_KEY') here is undefined
                    }
                    ..........
                    ..........
                ]
            }
        }
        ..........
        ..........
    });

    grunt.loadNpmTasks('grunt-string-replace');
}
(I have double, triple and quadruple checked that I'm using the correct option keys)
The problem is that you're reading the grunt options during the "config stage" of the task, which runs only once, before your set-env task sets them. Evaluating the custom option key at that point in the code indeed yields undefined. (Note that this is practically the equivalent of using the initConfig block.)
What you can do instead is, rather than reading the option values from the options object, modify the config object of the task directly using grunt.config.set, which would enable you to do what you've been trying.
So basically, instead of
grunt.option(propName, propValue);
use something like
grunt.config.set('mytask.replacements', options.replacements);
(Of course, this will require a sizable reworking of your code; I won't get into that here.)
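For instance, inside the set-env task from the question, something along these lines (a sketch; the replacements array mirrors the string-replace config shown above):

// Inside the set-env task, after propName/propValue have been parsed:
// write the value straight into the already-registered string-replace config.
grunt.config.set('string-replace.config.options.replacements', [
    {
        pattern: /var _OPTION_KEY = "(.*?)";/ig,
        replacement: 'var _OPTION_KEY = "' + propValue + '";'
    }
]);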
edit: probably there's an even cleaner solution using the templating functionality of grunt; see this stackoverflow answer, and the grunt api docs on templating:
Template strings can be processed manually using the provided template functions. In addition, the config.get method (used by many tasks) automatically expands <% %> style template strings specified as config data inside the Gruntfile.
The point being that these are evaluated not when the config block is parsed, but only when the task reads the values using config.get.
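A sketch of that approach, reusing the question's _OPTION_KEY name (the template is only expanded when string-replace reads its config, by which time set-env has run):

// set-env task: store the value as config data instead of an option
grunt.config.set('_OPTION_KEY', propValue);

// string-replace config: reference it lazily with a template string;
// config.get expands <%= _OPTION_KEY %> only when the task reads it
grunt.config('string-replace', {
    config: {
        files: configFiles,
        options: {
            replacements: [{
                pattern: /var _OPTION_KEY = "(.*?)";/ig,
                replacement: 'var _OPTION_KEY = "<%= _OPTION_KEY %>";'
            }]
        }
    }
});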
Your pattern of using the options object to share values between tasks works better if it's between two custom tasks of yours - you can set it in one task, read it in the other, not in the configuration, but as an actual step of running the tasks.
In general, although it seems doable, I'd say this is not the workflow grunt has in mind - if you know which environment you're running grunt in, it's easier to pass in the environment parameters through the options command-line flag directly when you run a grunt task, which would already take effect in any task configuration you're doing.