I use the experimentalCodeSplitting: true feature of rollup 0.61.2 to get nice code splitting. Because my project also contains assets, I created a plugin which copies and minifies the asset files accordingly. The problem is that the hooks I used are called for every chunk that is created, so the assets are copied and minified multiple times. The only workaround I found is to set a flag to true after everything is done. Is there a rollup hook that is called once after everything is finished (or once before everything starts), rather than on every chunk? Currently my plugin looks something like the following code (I removed some parts and simplified for readability):
export default function copy(userOptions = {}) {
  const name = 'copyAndMinify';
  const files = userOptions.files || [];
  let isCopyDone = false;

  return {
    name: name,

    // also tried onwrite, ongenerate, buildEnd and generateBundle
    buildStart() {
      if (isCopyDone) {
        return;
      }

      for (let key in files) {
        const src = key;
        const dest = files[key];

        try {
          minifyFile(src, dest);
        } catch (err) {
          fatal(name, src, dest, err);
        }
      }

      isCopyDone = true;
    }
  };
}
Maybe there is a better way of doing this kind of thing, because with this implementation I always have to completely restart rollup to execute my plugin.
The rollup site lists all the available plugin hooks.
generateBundle seems like what you'd want.
generateBundle (formerly onwrite and ongenerate) - a (outputOptions, bundle, isWrite) => void function hook called when bundle.generate() or bundle.write() is being executed; you can also return a Promise. bundle provides the full list of files being written or generated along with their details.
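Applied to the plugin above, it could look roughly like this (a minimal sketch, reusing the minifyFile and fatal helpers from the question):

export default function copy(userOptions = {}) {
  const name = 'copyAndMinify';
  const files = userOptions.files || [];

  return {
    name: name,

    // called once per bundle.generate()/bundle.write() with the full
    // set of chunks, not once per chunk, so no isCopyDone flag is needed
    generateBundle(outputOptions, bundle, isWrite) {
      for (let key in files) {
        try {
          minifyFile(key, files[key]);
        } catch (err) {
          fatal(name, key, files[key], err);
        }
      }
    }
  };
}

Because the hook runs on every generate/write call, watch-mode rebuilds will re-copy the assets too, which also addresses the restart problem mentioned above.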
I'm composing a small library (for the first time) that utilizes the Resize Observer API. However, I want to include the polyfill for browsers that don't support it, and only load it into the final build if it's required, though I'm not sure where to place the API check/module load, or if this method is the right way to do it.
Say I have my index file which is the main file exported with the library, should I add this statement outside the function definition like so:
// index.js
(async () => {
  if (!('ResizeObserver' in window)) {
    const module = await import('resize-observer-polyfill');
    window.ResizeObserver = module.default;
  }
})();

// main function exported
export default () => {
  function loadResizeObserver() {
    const ro = new ResizeObserver((entries) => {
      // ...
    });
  }
}
Is there a better way to compose something like this? How will this affect the build and what's included if a consumer installs the package (with and without native RO support?)
First time publishing and realized I know nothing about it.
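One alternative composition (a sketch, assuming resize-observer-polyfill's default export is the constructor, as its README shows) is to move the check into the exported function itself, so bundlers still split the polyfill into a lazy chunk and browsers with native support never download it:

// index.js — a hypothetical sketch, not the only way to compose this
let polyfillPromise;

function ensureResizeObserver() {
  if ('ResizeObserver' in window) return Promise.resolve();
  // only fetched in browsers lacking native support; bundlers emit
  // this dynamic import as a separate lazy chunk
  return import('resize-observer-polyfill').then((module) => {
    window.ResizeObserver = module.default;
  });
}

// main function exported
export default async () => {
  // memoized so repeated calls don't repeat the check
  polyfillPromise = polyfillPromise || ensureResizeObserver();
  await polyfillPromise;

  return new ResizeObserver((entries) => {
    // ...
  });
};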
I am building an Atom Electron app. Right now I have this in the preload.js of one of my webviews:
var { requireTaskPool } = require('electron-remote');
var _ = require('lodash');

var work = requireTaskPool(require.resolve('./local/path/to/js/file.js'));

function scriptRun() {
  console.log('Preload: Script Started');

  // `work` will get executed concurrently in separate background processes
  // and resolve with a promise
  _.times(1, () => {
    work(currentTab).then(result => {
      console.log(`Script stopped. Total time running was ${result} ms`);
    });
  });
}

module.exports = scriptRun;
scriptRun();
It gets a local script and then executes it in a background process.
I want to do exactly the same thing, except I want to retrieve the script from an external source, like so:
work = requireTaskPool(require.resolve('https://ex.com/path/to/js/file.js'));
When I do this, I get errors like:
Uncaught Error: Cannot find module 'https://ex.com/path/to/js/file.js'
How can I load external scripts, and then use them with my work function? My feeling is that require only works with local files. If AJAX is the answer, can I see an example of how to fetch a script and then pass it into work without executing it first?
I was able to load a remote js file and execute the function defined in it; hopefully this will give you enough to start with...
my remote dummy.js, available online somewhere:
const dummy = () => console.log('dummy works');
my download.js:
const vm = require("vm");
const rp = require('request-promise');

module.exports = {
  downloadModule: async () => {
    try {
      let body = await rp('http://somewhere.online/dummy.js');
      let script = vm.createScript(body);
      script.runInThisContext();
      // this is the actual dummy method loaded from remote dummy.js,
      // now available in this context:
      return dummy;
    } catch (err) {
      console.log('err', err);
    }
    return null;
  }
};
You need to add the request-promise package.
Then in my main.js I use it like this:
const {downloadModule} = require('./download');

downloadModule().then((dummy) => {
  if (dummy) dummy();
  else console.log('no dummy');
});
When I run it, this is what I get:
$ electron .
dummy works
I wanted to create an actual module and require it, but I have not had the time to play with this further. If I accomplish that I will add it here.
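For reference, a sketch of that idea (a hypothetical helper; it assumes the remote file assigns its API to module.exports like a regular CommonJS module): wrap the downloaded source in Node's module wrapper and hand it a fresh module object, instead of running it in the current context.

const vm = require('vm');
const rp = require('request-promise');

// hypothetical helper: evaluate remote CommonJS source and return its exports
async function requireRemote(url) {
  const body = await rp(url);
  const module = { exports: {} };

  // emulate Node's CommonJS wrapper so the remote code can use
  // module.exports, exports and require as usual
  const wrapper = new vm.Script(
    '(function (module, exports, require) {\n' + body + '\n})'
  );
  wrapper.runInThisContext()(module, module.exports, require);

  return module.exports;
}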
You have not provided any details of your file.js, but I can give you the general idea.
There are two things that you need at minimum to call your package a module:
file.js (of course you have it) and
package.json
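A minimal package.json could be as small as this (the name here is a placeholder):

{
  "name": "name-of-your-package-in-node-modules",
  "version": "1.0.0",
  "main": "file.js"
}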
The structure of your file.js should be something like this:
//load your dependencies here
var something = require("something");

//module.exports is necessary to export your code,
//so that you can fetch this code in another file by using require.
module.exports = {
  abc: function () {
    //code for abc function
  },
  xyz: function () {
    //code for xyz function
  }
};
Now if you pack your module (file.js plus package.json) into a gzipped tarball and put it on any website, you can install it with:
npm install https://ex.com/path/to/your-package.tgz
A copy of your package will then be put into the node_modules folder.
So, now you can access it as:
var x = require('name-of-your-package-in-node-modules');
Now, you can also do:
var abc = require('name-of-your-package-in-node-modules').abc;
or
var xyz = require('name-of-your-package-in-node-modules').xyz;
I'm struggling to come up with a pattern that will satisfy both my tests and Travis's ability to run my script.
I'll start off by saying that I have Travis run my script with a babel-node command, specified in my travis.yml like so:
script:
- babel-node ./src/client/deploy/deploy-feature-branch.js
That means when babel-node runs this, I need a method to auto-run in deploy-feature-branch.js, which I have: the line let { failure, success, payload } = deployFeatureBranch(). That destructuring assignment forces deployFeatureBranch() to run when the file is executed.
In there I also have an options object:
let options = {
  localBuildFolder: 'build',
  domain: 'ourdomain',
  branch: process.env.TRAVIS_PULL_REQUEST_BRANCH
}
During a PR build, Travis automatically sets the value of process.env.TRAVIS_PULL_REQUEST_BRANCH. That's great! However, the way I've set up this module doesn't work so well for tests. The problem is that when I try to set options from my test, the options object isn't actually being set.
I guess the problems I want to address are, first and foremost, why options isn't being set when I set it from my test, and then whether there is a better way to design this module overall.
Test
import { options, deployFeatureBranch } from '../../../client/deploy/deploy-feature-branch'

it.only('creates a S3 test environment for a pull request', async () => {
  options.branch = 'feature-100'
  options.domain = 'ourdomain'
  options.localDeployFolder = 'build'

  const result = await deployFeatureBranch()
  expect(result.success).to.be.true
})
When deployFeatureBranch() runs in my test, the implementation tries to reference options.branch, but it ends up being undefined even though I set it to 'feature-100'. branch defaults to process.env.TRAVIS_PULL_REQUEST_BRANCH, but I want to be able to override that and set it from tests.
deploy-feature-branch.js
import * as deployApi from './deployApi'

let options = {
  localBuildFolder: 'build',
  domain: 'ourdomain',
  branch: process.env.TRAVIS_PULL_REQUEST_BRANCH
}

const deployFeatureBranch = async (options) => {
  console.log(green(`Deploying feature branch: ${options.branch}`))
  let { failure, success, payload } = await deployApi.run(options)
  return { failure, success, payload }
}

let { failure, success, payload } = deployFeatureBranch(options)

export {
  options,
  deployFeatureBranch
}
I can't really think of a better way to structure this and also resolve the options-setting issue. I'm not limited to Node modules either; I would be fine with ES6 exports too.
Instead of exporting options and modifying it, just pass in your new options object when calling the function in your test:
import { deployFeatureBranch } from '../../../client/deploy/deploy-feature-branch'

it.only('creates a S3 test environment for a pull request', async () => {
  const options = {
    branch: 'feature-100',
    domain: 'ourdomain',
    localDeployFolder: 'build'
  };

  const result = await deployFeatureBranch(options)
  expect(result.success).to.be.true
})
The reason it isn't working is that your deployFeatureBranch() function expects options to be passed in when you call it, which you aren't doing; the options parameter also shadows the module-level options object, so the values you set on the import are never read.
Also, exporting and mutating an object, while it might work, is really weird and should be avoided. Creating a new object (or cloning the exported object) is definitely the way to go.
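A sketch of that refactor (hypothetical, but keeping the module's shape): hold the defaults in a private object and merge per-call overrides, so no shared state is ever mutated:

import * as deployApi from './deployApi'

// private defaults; not exported, so nothing outside can mutate them
const defaultOptions = {
  localBuildFolder: 'build',
  domain: 'ourdomain',
  branch: process.env.TRAVIS_PULL_REQUEST_BRANCH
}

// callers (the Travis entry point or a test) override any subset
const deployFeatureBranch = async (overrides = {}) => {
  const options = { ...defaultOptions, ...overrides }
  let { failure, success, payload } = await deployApi.run(options)
  return { failure, success, payload }
}

export { deployFeatureBranch }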
I'm upgrading from Gulp 3 to 4, and I'm running into an error:
The following tasks did not complete: build
Did you forget to signal async completion?
I understand what it's saying, but can't understand why this code is triggering it.
Error or not, the task completes (the files are concatenated and written to dest). Executing the same code without lazypipe results in no error, and removing the concatenation within lazypipe also fixes the error.
Wrapping the whole thing in something that creates a stream (like merge-stream) fixes the issue. I guess something about the interaction between gulp-concat and lazypipe is preventing a stream from being correctly returned.
Here's the (simplified) task:
gulp.task('build', function() {
  var dest = 'build';

  var buildFiles = lazypipe()
    .pipe(plugins.concat, 'cat.js') // Task will complete if I remove this
    .pipe(gulp.dest, dest);

  // This works
  // return gulp.src(src('js/**/*.js'))
  //   .pipe(plugins.concat('cat.js'))
  //   .pipe(gulp.dest(dest));

  // This doesn't (unless you wrap it in a stream-making function)
  return gulp.src(src('js/**/*.js'))
    .pipe(buildFiles());
});
Any advice appreciated!
This is a known issue when using lazypipe with gulp 4 and it's not going to be fixed in the near future. Quote from that issue:
OverZealous commented on 20 Dec 2015
As of now, I have no intention of making lazypipe work on Gulp 4.
As far as I can tell this issue is caused by the fact that gulp 4 uses async-done which has this to say about its stream support:
Note: Only actual streams are supported, not faux-streams; Therefore, modules like event-stream are not supported.
When you use lazypipe() as the last pipe, what you get is a stream that lacks many of the properties you usually have when working with streams in gulp. You can see this for yourself by logging the streams:
// console output shows lots of properties
console.log(gulp.src(src('js/**/*.js'))
  .pipe(plugins.concat('cat.js'))
  .pipe(gulp.dest(dest)));

// console output shows much fewer properties
console.log(gulp.src(src('js/**/*.js'))
  .pipe(buildFiles()));
This is probably the reason why gulp considers the second stream to be a "faux-stream" and doesn't properly detect when the stream has finished.
Your only option at this point is some kind of workaround. The easiest workaround (which doesn't require any additional packages) is to just add a callback function cb to your task and listen for the 'end' event:
gulp.task('build', function(cb) {
  var dest = 'build';

  var buildFiles = lazypipe()
    .pipe(plugins.concat, 'cat.js')
    .pipe(gulp.dest, dest);

  gulp.src(src('js/**/*.js'))
    .pipe(buildFiles())
    .on('end', cb);
});
Alternatively, adding any .pipe() after buildFiles() should fix this, even one that doesn't actually do anything like gutil.noop():
var gutil = require('gulp-util');

gulp.task('build', function() {
  var dest = 'build';

  var buildFiles = lazypipe()
    .pipe(plugins.concat, 'cat.js')
    .pipe(gulp.dest, dest);

  return gulp.src(src('js/**/*.js'))
    .pipe(buildFiles())
    .pipe(gutil.noop());
});
So the error is clear, and I had to do some refactoring to make things work again with gulp 4. I ended up making some extra methods that take a source and a destination and perform the work previously done by my lazypipe implementation.
I have to say I don't miss lazypipe now; it's just a different approach. I did end up with some extra tasks, but they all use a standard method like in the example below:
const { src, dest, series, parallel } = require('gulp');

// previously a lazypipe, now just a method to return from a gulp4 task
const _processJS = (sources, destination) => {
  return src(sources)
    .pipe(minify(...))
    .pipe(uglify(...))
    .pipe(obfuscate(...))
    .pipe(whatever())
    .pipe(dest(destination));
};

const jsTaskXStep1 = () => {
  return src(...).pipe(...).pipe(...).pipe(dest(...));
};

const jsTaskXStep2 = () => {
  return _processJS(['./src/js/x/**/*.js'], './dist/js');
};

const jsTaskYStep1 = () => {
  return src(...).pipe(...).pipe(...).pipe(dest(...));
};

const jsTaskYStep2 = () => {
  return _processJS(['./src/js/y/**/*.js'], './dist/js');
};

const jsTaskX = series(jsTaskXStep1, jsTaskXStep2);
const jsTaskY = series(jsTaskYStep1, jsTaskYStep2);

module.exports = {
  js: parallel(jsTaskX, jsTaskY),
  css: ...,
  widgets: ...,
  ...
  default: parallel(js, css, widgets, series(...), ...)
};
So basically you can put your lazypipe stuff in methods like _processJS in this example, then create tasks that use them and combine everything with gulp's series and parallel. Hope this helps out some of you who are struggling with this.
Regarding an issue I am having with gulp-tag-version, the README there suggests:
function inc(importance) {
  // get all the files to bump version in
  return gulp.src(['./package.json', './bower.json'])
    // bump the version number in those files
    .pipe(bump({type: importance}))
    // save it back to filesystem
    .pipe(gulp.dest('./'))
    /* Recompile the Javascript here */
    // commit the changed version number
    .pipe(git.commit('bumps package version'))
    // read only one file to get the version number
    .pipe(filter('package.json'))
    // **tag it in the repository**
    .pipe(tag_version())
}

gulp.task('patch', function() { return inc('patch'); })
gulp.task('feature', function() { return inc('minor'); })
gulp.task('release', function() { return inc('major'); })
I would like to recompile some JavaScript between the version being bumped and git tagging it. I have a task for that, js, but it is not clear how one can call the js task (or otherwise re-order the bump/tag steps) to accomplish this common and desirable outcome (i.e. having the version in the headers of your compiled code).
It is also worth noting that if one had three tasks, bump, compile and tag, the bumped package.json appears to be cached and not re-read by a separate tag task.
One simple way to get this working is to create separate tasks that depend on each other:
tag depends on commit, since the tag will be applied to the last commit, so the commit has to exist first
commit depends on js, because we should re-build your files with the bumped version
js depends on bump
bump has no dependencies; it bumps the version in your manifest files before anything else runs.
The problem with this is that you would have to change your js task to add the bump dependency, and I'm sure you don't want to bump each time you recompile or one of your watches is triggered.
So you could use something like run-sequence to bypass that.
Also, one thing I like to do to reduce the number of gulp tasks is to use command-line arguments (here --major, --minor, or --patch), but you can stick with the inc function if you want; that's a detail.
So you could do something like this (from top to bottom in execution order):
gulp.task('uprev', function () {
  return gulp.src(['./package.json', './bower.json'])
    .pipe(bump({ type: process.argv[3] ? process.argv[3].substr(2) : 'patch' }))
    .pipe(gulp.dest('./'));
});

gulp.task('rebuild', function (cb) {
  runSequence('uprev', 'js', cb); // uprev will be executed here before js
});

gulp.task('commit', ['rebuild'], function () {
  return gulp.src(['./package.json', './bower.json', 'dist/**/*'])
    .pipe(git.add())
    .pipe(git.commit('bump version'));
});

gulp.task('bump', ['commit'], function () {
  return gulp.src('package.json')
    .pipe(tagVersion());
});
For your commit you may want to add both your manifests and the compiled files; I've added all the content of a generic dist folder for the demo.
I've intentionally split the bump and commit tasks so you don't have to use gulp-filter, which seems pretty unnecessary for this little kind of thing, but that's up to you of course.
One last thing (promise!): you could avoid gulp-tag-version by using a simple Node fs call combined with the gulp-git you already have:
gulp.task('bump', ['commit'], function (cb) {
  fs.readFile('./package.json', function (err, data) {
    if (err) { return cb(err); }

    // re-read the bumped version from disk so a stale value is never tagged
    var version = JSON.parse(data.toString()).version;

    git.tag(version, 'Version message', function (err) {
      cb(err);
    });
  });
});
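With those task dependencies in place, running the bump task executes the whole chain in order (uprev, then js, then commit, then the tag). For example, assuming the process.argv handling from uprev above, cutting a minor release would be:

$ gulp bump --minor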