I'm a JavaScript developer and fairly new to creating a build process from scratch. I chose Grunt for my current project and have created a Gruntfile that does about 90% of what I need; it works great except for this one issue. While developing a Chrome extension, I reference several JavaScript files in the manifest.json file. For my build process I concatenate all of these files and minify them into one file to be included in manifest.json. Is there any way to update the file references in the manifest.json file during the build process so it points to the minified version?
Here is a snippet of the src manifest file:
{
  "content_scripts": [{
    "matches": [
      "http://*/*"
    ],
    "js": [
      "js/lib/zepto.js",
      "js/injection.js",
      "js/plugins/plugin1.js",
      "js/plugins/plugin2.js",
      "js/plugins/plugin3.js",
      "js/injection-init.js"
    ]
  }],
  "version": "2.0",
}
I have a grunt task that concatenates and minifies all the js files listed above into one file called injection.js, and I would like a grunt task that can modify the manifest file so it looks like this:
{
  "content_scripts": [{
    "matches": [
      "http://*/*"
    ],
    "js": [
      "js/injection.js"
    ]
  }],
  "version": "2.0",
}
What I've done for now is keep 2 versions of the manifest file, one for dev and one for build; during the build process the build version is copied instead. This means I need to maintain 2 versions, which I'd rather not do. Is there any way to do this more elegantly with Grunt?
Grunt provides its own API for reading and writing files, which I feel is better than pulling in other dependencies like fs.
Edit/update a JSON file using Grunt with the command grunt updatejson:key:value after putting this task in your Gruntfile:
grunt.registerTask('updatejson', function (key, value) {
    var projectFile = "path/to/json/file";
    if (!grunt.file.exists(projectFile)) {
        grunt.log.error("file " + projectFile + " not found");
        return true; // return false to abort the execution
    }
    var project = grunt.file.readJSON(projectFile); // get the file as a JSON object
    project[key] = value; // edit the value; you can also use project.key if you know what you are updating
    grunt.file.write(projectFile, JSON.stringify(project, null, 2)); // serialize it back to the file
});
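A quick usage sketch: the task above is invoked as, e.g., grunt updatejson:version:2.0.1. For the asker's manifest, though, the value to change is nested inside content_scripts, which a flat key can't reach; a hedged variation (task name and paths are placeholders) indexes into the object directly:
grunt.registerTask('pointmanifest', function () {
    // Hypothetical paths; adjust to your layout
    var manifest = grunt.file.readJSON('src/manifest.json');
    // Swap the dev script list for the single minified bundle
    manifest.content_scripts[0].js = ['js/injection.js'];
    grunt.file.write('build/manifest.json', JSON.stringify(manifest, null, 2));
});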
I do something similar - you can load your manifest, update the contents then serialize it out again. Something like:
var fs = require('fs'); // needed for writeFileSync below
grunt.registerTask('fixmanifest', function() {
    var tmpPkg = require('./path/to/manifest/manifest.json');
    tmpPkg.foo = "bar";
    fs.writeFileSync('./new/path/to/manifest.json', JSON.stringify(tmpPkg, null, 2));
});
I disagree with the other answers here.
1) Why use grunt.file.write instead of fs? grunt.file.write is just a wrapper for fs.writeFileSync (see the code here).
2) Why use fs.writeFileSync when grunt makes it really easy to do stuff asynchronously? There's no doubt that you don't need async in a build process, but if it's easy to do, why wouldn't you? (It is, in fact, only a couple characters longer than the writeFileSync implementation.)
I'd suggest the following:
var fs = require('fs');
grunt.registerTask('writeManifest', 'Updates the project manifest', function() {
    var manifest = require('./path/to/manifest'); // .json not necessary with require
    manifest.fileReference = '/new/file/location';
    // Calling this.async() returns an async callback and tells grunt that your
    // task is asynchronous, and that it should wait till the callback is called
    fs.writeFile('./path/to/manifest.json', JSON.stringify(manifest, null, 2), this.async());
    // Note that "require" loads files relative to __dirname, while fs
    // is relative to process.cwd(). It's easy to get burned by that.
});
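To tie this into a build, an alias task can run it after minification. The concat/uglify task names below are assumptions, not something from the question:
// Hypothetical pipeline: concat + uglify produce js/injection.js,
// then the manifest is rewritten to point at the bundle
grunt.registerTask('build', ['concat', 'uglify', 'writeManifest']);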
I have a hybrid AngularJS/Angular application that will take some time to fully migrate to an Angular app. While that migration is in progress, I'd like to move away from the previous build system and have the CLI and webpack manage all of the old AngularJS scripts as well. This is possible, as I've done it before, by adding all of my scripts to the scripts section in angular.json like the following:
"scripts": [
"src/app/angularjs/app.js",
"src/app/angularjs/controllers/main.js",
"src/app/angularjs/services/someService.js",
"src/app/angularjs/controllers/someController.js"
],
This works well, and the CLI builds via ng serve and ng build continue to work for the hybrid bootstrapped app as needed. The problem I'm running into now is that manually listing each file for the current application I'm migrating is not ideal. I have hundreds of scripts that need to be added, and what I need is to be able to use a globbing pattern like the following:
"scripts": [
"src/app/angularjs/**/*.js"
],
The problem is that this syntax, from what I can tell, is not supported. Glob patterns are supported in the assets section of angular.json, as stated here, but not in the scripts section: https://angular.io/guide/workspace-config#assets-configuration
In the scripts section I can't find a similar solution. It does have an expanded object API, but nothing I can tell that solves the problem of selecting all .js files from a particular directory, as listed here: https://angular.io/guide/workspace-config#styles-and-scripts-configuration
Is it possible by some means to use a glob pattern or similar approach to select all files of a directory for the scripts section in angular.json so I don't have to manually list out hundreds of individual .js files?
The Bad News
The scripts section does not support the same glob patterns that the assets section does.
The Good News(?)
Since you're transitioning away from AngularJS, you hopefully won't have any new files to import in the future, so you could just generate the list of all the files you need to import.
Make your way to the src/app/angularjs directory and run the following:
find . -iregex '.*\.\(js\)' -printf '"%p",\n'
That will give you your list, already quoted for your convenience. You may need to do a quick search/replace (changing "." to "src/app/angularjs"), and don't forget to remove the last comma, but once you've done that once you should be all set.
The Extra News
You can further filter out unwanted files with -not, so (per your comment) you might do:
find . -iregex '^.*\.js$' -not -iregex '^.*_test\.js$' -printf '"%p",\n'
And that should give you all your .js files without your _test.js files.
KISS
Of course, this isn't a complex pattern, so as @atconway points out below, this will work just as well:
find . -iname "*.js" -not -iname "*_test.js" -printf '"%p",\n'
I'll keep the above, though, for use in situations where the full power of regex might come in handy.
I wanted to extend the answer by @JasonVerber; here is Node.js code, and therefore (I believe) cross-platform.
Firstly, install the find package and then save the contents of the snippet in some file.js.
Afterwards, specify the paths so that they resolve to where you want to get your files from and where to put the resulting file.
After that, run node file-name.js and this will save all found file paths to the resultPath in result.txt, ready to Ctrl+A, Ctrl+C, Ctrl+V.
const find = require('find');
const path = require('path');
const fs = require('fs');

// BEFORE USAGE INSTALL the `find` package

// Path to the folder in which to look for files
const sourcePath = path.resolve(path.join(__dirname, 'cordova-app', 'src'));
// Path prefix that will be removed from the absolute path to each file
const pathToRemove = path.resolve(path.join(__dirname, 'cordova-app'));
// Path where result.txt will be written
const resultPath = path.resolve(path.join(__dirname, './result.txt'));

// Collects the file paths
const res = [];
// Path with \ replaced by /
const pathToRemoveReplaced = pathToRemove.replace(/\\/g, '/');

// Get all files that match the regex
find.eachfile(/\.js$/, sourcePath, file => {
    // First replace all \ with /, then strip the root-to-source prefix so that only the relative path is left
    const fileReplaced = file.replace(/\\/g, '/').replace(`${pathToRemoveReplaced}/`, '');
    // Surround with quotes
    res.push(`"${fileReplaced}"`);
}).end(() => {
    // Write the file, joining results with a comma and newline
    fs.writeFileSync(resultPath, res.join(',\r\n'), 'utf8');
    console.log('DONE!');
});
The result I got while testing (with /\.ts$/ as the regex):
"src/app/app.component.spec.ts",
"src/app/app.component.ts",
"src/app/app.module.ts",
"src/environments/environment.prod.ts",
"src/environments/environment.ts",
"src/main.ts",
"src/polyfills.ts",
"src/test.ts"
I use FayeJS, and the latest version has been modified to use RequireJS, so there is no longer a single file to link into the browser. Instead, the structure is as follows:
/adapters
/engines
/mixins
/protocol
/transport
/util
faye_browser.js
I am using the following Node.js build script to try to end up with all of the above minified into a single file:
var fs = require('fs-extra'),
    requirejs = require('requirejs');

var config = {
    baseUrl: 'htdocs/js/dev/faye/',
    name: 'faye_browser',
    out: 'htdocs/js/dev/faye/dist/faye.min.js',
    paths: {
        dist: "empty:"
    },
    findNestedDependencies: true
};
requirejs.optimize(config, function (buildResponse) {
    // buildResponse is just a text output of the modules included.
    // Load the built file for the contents.
    // Use config.out to get the optimized file contents.
    var contents = fs.readFileSync(config.out, 'utf8');
}, function (err) {
    // optimization error callback
    console.log(err);
});
The content of faye_browser.js is:
'use strict';

var constants = require('./util/constants'),
    Logging = require('./mixins/logging');

var Faye = {
    VERSION: constants.VERSION,
    Client: require('./protocol/client'),
    Scheduler: require('./protocol/scheduler')
};

Logging.wrapper = Faye;

module.exports = Faye;
As I understand it, the optimizer should pull in the required files, and then if those files have required files it should pull those in too, etc., and output a single minified faye.min.js that contains the whole lot, refactored so that no additional server-side calls are necessary.
What happens is faye.min.js gets created, but it only contains the content of faye_browser.js, none of the other required files are included.
I have searched all over the web, and looked at a heap of different examples and none of them work for me.
What am I doing wrong here?
For anyone else trying to do this, I missed that on the download page it says:
The Node.js version is available through npm. This package contains a
copy of the browser client, which is served up by the Faye server when
running.
So to get it you have to pull down the code via npm, then go into the npm install dir, and it is in the "client" dir...
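If you do need to build it from source, one possible culprit (an assumption on my part, not verified against this Faye version) is that faye_browser.js uses CommonJS-style require() calls, which r.js does not trace by default; it only follows AMD-style define/require. r.js has a cjsTranslate option that wraps such files so their dependencies are picked up:
var config = {
    baseUrl: 'htdocs/js/dev/faye/',
    name: 'faye_browser',
    out: 'htdocs/js/dev/faye/dist/faye.min.js',
    // Wrap CommonJS-style modules in define() so r.js can trace
    // their require('./...') calls as dependencies
    cjsTranslate: true,
    findNestedDependencies: true
};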
I want to know how I can verify if a file was downloaded using Selenium Webdriver after I click the download button.
Your question doesn't say whether you want to confirm the download locally or remotely (like BrowserStack). If it is remote, then my answer will be "NO": you can see that the file is getting downloaded, but you cannot access the folder, so you won't be able to assert that the file has been downloaded.
If you want to achieve this locally (in Chrome), then the answer is "YES", and you can do it something like this:
In wdio.conf.js (to know where it is getting downloaded):
var path = require('path');
const pathToDownload = path.resolve('chromeDownloads');
// chromeDownloads above is the name of the folder in the root directory

exports.config = {
    capabilities: [{
        maxInstances: 1,
        browserName: 'chrome',
        os: 'Windows',
        chromeOptions: {
            args: [
                'user-data-dir=./chrome/user-data',
            ],
            prefs: {
                "download.default_directory": pathToDownload,
            }
        }
    }],
    // ... the rest of your wdio config
};
And in your spec file (to check whether the file is downloaded or not):
const fsExtra = require('fs-extra');
const pathToChromeDownloads = './chromeDownloads';

describe('User can download and verify a file', () => {
    before(() => {
        // Clean up the chromeDownloads folder and create a fresh one
        fsExtra.removeSync(pathToChromeDownloads);
        fsExtra.mkdirsSync(pathToChromeDownloads);
    });

    it('Download the file', () => {
        // Code to download
    });

    it('Verify the file is downloaded', () => {
        // Code to verify
        // Get the name of the file and assert it with the expected name
    });
});
More about fs-extra: https://www.npmjs.com/package/fs-extra
Hope this helps.
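One way to flesh out that verification step, as a sketch: the expected file name below is a placeholder, and this assumes WebdriverIO's synchronous mode so a plain Node assertion can be used:
const assert = require('assert');

it('Verify the file is downloaded', () => {
    // List whatever landed in the downloads folder and check for the
    // expected name (hypothetical; substitute your app's real file name)
    const files = fsExtra.readdirSync(pathToChromeDownloads);
    assert.ok(files.includes('expected-report.csv'), 'file was not downloaded');
});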
TL;DR: Unless your web-app has some kind of visual/GUI trigger once the download finishes (some text, an image/icon-font, push-notification, etc.), then the answer is a resounding NO.
Webdriver can't go outside the scope of your browser, but your underlying framework can. Especially if you're using NodeJS. :)
Off the top of my head I can think of a few ways I've been able to do this in the past. Choose as applicable:
1. Verify if the file has been downloaded using Node's File System (aka fs)
Since you're running WebdriverIO under a NodeJS environment, you can make use of its powerful lib tool-suite. I would use fs.exists or fs.existsSync to verify that the file is in the expected folder.
If you want to be diligent, then also use fs.statSync in conjunction with fs.exists and poll the file until it has the expected size (e.g.: > 2560 bytes).
There are multiple examples online that can help you put together such a script; a sketch follows below. Use the fs documentation, but other resources as well. Lastly, you can add said script inside your it/describe statement (I remember you were using Mocha).
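A minimal polling sketch along those lines (the file name, size threshold, and timeout are placeholders):
const fs = require('fs');

// Resolves once the file exists and exceeds the expected size;
// rejects if it doesn't get there within the timeout
function waitForDownload(filePath, minBytes, timeoutMs) {
    const start = Date.now();
    return new Promise((resolve, reject) => {
        (function poll() {
            if (fs.existsSync(filePath) && fs.statSync(filePath).size > minBytes) {
                return resolve();
            }
            if (Date.now() - start > timeoutMs) {
                return reject(new Error('Download did not finish in time'));
            }
            setTimeout(poll, 250);
        })();
    });
}

// usage: await waitForDownload('./chromeDownloads/report.pdf', 2560, 30000);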
2. Use child_process's exec command to launch third-party scripts
Though this method requires more work to set up, I find it more relevant in the long run.
!!! Caution: Apart from launching the script, you need to write a script in a third-party framework.
Using an AutoIT script;
Using a Sikuli script;
Using a TestComplete (not linking it, I don't like it that much), or [insert GUI verification script here] script;
Note: All the above frameworks can generate an .exe file that you can trigger from your WebdriverIO test-cases in order to check if your file has been downloaded, or not.
Steps to take:
create one of the stand-alone scripts like mentioned above;
place the script's .exe file inside your project in a known folder;
use child_process.exec to launch the script and assert its result after it finishes its execution;
Example:
var exec = require('child_process').exec;

// Make sure you also remove the .exe from scriptName
var yourScript = pathToScript + scriptName;

var child = exec(yourScript);
child.on('close', function (code, signal) {
    if (code !== 0) {
        callback.fail(online.online[module][code]);
    } else {
        callback();
    }
});
Finally: I'm sure there are other ways to do it. But your main take-away from such a vague question should be: YES, you can verify if the file has been downloaded if you absolutely must, especially if this test-case is CRITICAL to your regression run.
While the tasks seemingly execute in the proper order (bump first, and then ngconstant creating a config file based on package.json's version property), I think they actually execute in parallel, and ngconstant reads package.json before bump has written it.
Running "bump" task
md
>> Version bumped to 2.0.6 (in package.json)
Running "ngconstant:production" (ngconstant) task
Creating module config at app/scripts/config.js...OK
The resulting package.json has 2.0.6 as the version, while config.js has 2.0.5.
My ngconstant config simply uses
grunt.file.readJSON('package.json')
to read the JSON.
So, basically the question is: how can I make sure that bump's write has finished before ngconstant reads the JSON, and what actually causes the above?
EDIT: the original Gruntfile: https://github.com/dekztah/sc2/blob/18acaff22ab027000026311ac8215a51846786b8/Gruntfile.js
EDIT: the updated Gruntfile that solves the problem: https://github.com/dekztah/sc2/blob/e7985db6b95846c025ba0b615bf239c4f9c11e8f/Gruntfile.js
Probably your package.json file is stored in memory and is not updated before you run the next task.
A workaround would be to create a script in your package.json such as:
"scripts": {
"bumb-and-ngconstant": "grunt:bump && grunt:build"
}
As per grunt-ng-constant documentation:
Or if you want to calculate the constants value at runtime you can create a lazy evaluated method which should be used if you generate your json file during the build process.
grunt.initConfig({
    ngconstant: {
        options: {
            dest: 'dist/module.js',
            name: 'someModule'
        },
        dist: {
            constants: function () {
                return {
                    lazyConfig: grunt.file.readJSON('build/lazy-config.json')
                };
            }
        }
    }
});
This forces the JSON to be read while the task runs, instead of when Grunt initializes the ngconstant task.
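Applied to the question's setup, a sketch (the dest path comes from the log output above; the module and constant names are assumptions):
grunt.initConfig({
    ngconstant: {
        options: {
            dest: 'app/scripts/config.js',
            name: 'config' // hypothetical module name
        },
        production: {
            constants: function () {
                // Read package.json only when ngconstant actually runs,
                // i.e. after grunt-bump has written the new version
                return { pkg: grunt.file.readJSON('package.json') };
            }
        }
    }
});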
I'd like to use grunt-contrib-concat for application frontend HTML templating purposes, and being able to include the same file more than once would be useful for me.
I'd like to define page partials and concatenate them into an output file that is then compiled by Handlebars.
I've got everything set up; however, concat doesn't allow me to use the same file more than once.
Basically, concat filters the sources so they don't occur more than once: the second partial1.hbs below will not be concatenated.
pageconcat: {
    src: [
        'app/templates/partial1.hbs',
        'app/templates/partial2.hbs',
        'app/templates/partial1.hbs'
    ],
    dest: 'app/result.hbs'
}
Is there any way to do this?
Update 1
After playing around with Grunt's console output functions, I was able to debug the concat plugin, after a fashion. Here's what I found out: the input array is deduplicated by Grunt for some reason.
Update 2
The deduplication occurs in the for-each file loop that Grunt uses. I've managed to bypass that (see answer). I don't know how reliable my solution is, but it's a workaround and it works well as long as you don't feed it the wrong input.
You may be able to use the file array format to set up two different source sets. Something like this:
{
    "files": [{
        "src": [
            "app/templates/partial1.hbs",
            "app/templates/partial2.hbs"
        ],
        "dest": "app/result.hbs"
    }, {
        "src": [
            "app/result.hbs",
            "app/templates/partial1.hbs"
        ],
        "dest": "app/result.hbs"
    }]
}
Added "app/result.hbs" to the second source set, as was pointed out in the comments.
Thanks.
Solution
After some debugging I came up with a solution. It's certainly not the best, but it works fine, as it should.
I edited the concat.js plugin file inside the node_modules folder in the following way:
grunt.registerMultiTask('concat', ...) {
    var self = this;
    // several lines of code
    // ...
    // replace f.src.filter(..) with
    self.data.src.filter(..);
}