Edit: As I wrote in the title and from the beginning, this is not about command-line parameters, and is thus not a duplicate. //Edit
I have a Sass setup with an indefinite number of uniquely-designed pages (page_1, page_2, etc), each having their own sass/pages/page_1/page_1.scss file.
The pages all belong to the same website, and each page's Sass file @imports the same set of files from a sass/includes folder.
With a basic gulp task watching sass/**/*, every page's styles get compiled anytime I make a change to any page's styles. Obviously this doesn't scale well.
I tried using gulp-watch, but it doesn't catch changes made to one of the included .scss files. It only catches changes made to the files that actually get compiled into an equivalent .css.
For the purposes of having my gulpfile be as DRY as possible, the best solution I could come up with was to maintain a basic array of folder names in gulpfile.js, and to loop through and watch each of them separately, using the same sass-compiling task for each folder.
var pageFolderNames = [
    'page_1',
    'page_2'
    // etc
];
Then for the gulp task, I have:
gulp.task('watch_pages', function()
{
    // Get array length
    var numPages = pageFolderNames.length;

    // Add a new watch task for each individual page
    for (var i = 0; i < numPages; i++)
    {
        gulp.watch('sass/pages/' + pageFolderNames[i] + '/**/*.scss', ['sass_page']);
    }
});
The (simplified) task that compiles sass:
// Task: Compile page-specific Sass
gulp.task('sass_page', function()
{
    return gulp.src('sass/pages/' + pageFolderNames[i] + '/**/*.scss')
        .pipe(plumber(plumberErrorHandler))
        .pipe(sass(...))
        .pipe(gulp.dest('css/pages/' + pageFolderNames[i]));
});
This approach (I know my JS-fu is weaksauce) results in an error:
'sass_page' errored after 71 μs
ReferenceError: i is not defined
Is there any way to pass parameters, such as i, to gulp tasks to get this working? Alternately, is there a better way to accomplish what I'm trying to do? I have a sneaking suspicion there is. :-/
I found out there is an on("change") event for gulp.watch, so this might be what you're looking for:
var pagesDir = 'sass/pages/';

gulp.task('watch_pages', function() {
    gulp.watch(pagesDir + '**/*')
        .on("change", function(file) {
            // absolute path to the folder being watched
            var changedDest = path.join(__dirname, pagesDir);
            // relative path to the changed file
            var changedFile = path.relative(changedDest, file.path);
            // split the relative path on the platform's separator
            // and get the specific folder with changes
            var pageFolder = changedFile.split(path.sep)[0];

            gulp.src(path.join(pagesDir, pageFolder) + '/**/*.scss')
                .pipe(plumber(plumberErrorHandler))
                .pipe(sass(...))
                .pipe(gulp.dest('css/pages/' + pageFolder));

            console.log(changedDest);
            console.log(changedFile);
            console.log(pageFolder);
        });
});
Also, this way you don't need to declare the folder variables. If you add directories within the path being watched, it should pick them up and name the destination folders accordingly.
Theoretically, the gulp task to compile Sass should work within the watch task. I played around with the paths, and it seems to be spitting them out correctly. Let me know what happens; I can modify if necessary.
The required packages:
var gulp = require("gulp"),
    path = require("path"),
    sass = require("gulp-sass"),
    plumber = require("gulp-plumber");
BTW, since you already have access to the file path, you can perhaps target the specific scss file instead of the whole directory.
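For instance, still inside the same change handler, a sketch reusing file, pageFolder, and plumberErrorHandler from the snippet above (note a changed shared include would still need the whole folder recompiled):

// Sketch: compile only the file that changed instead of the whole folder.
// Only safe when the changed file isn't a shared include.
gulp.src(file.path)
    .pipe(plumber(plumberErrorHandler))
    .pipe(sass())
    .pipe(gulp.dest('css/pages/' + pageFolder));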
As Brian answered, the best approach is to have one watcher, in the same way as the principle of delegation with DOM event listeners. Even if in our case it may not matter much, as long as resource consumption and performance don't bother us, one watcher is still the best approach. Then you get the file that was changed, and from there you get your page folder, so I won't add anything on that point. Except that you don't necessarily need to use .on("change"): you can pass a callback directly to gulp-watch's watch(), as this example shows:
watch('./app/tempGulp/json/**/*.json', function (evt) {
    jsonCommentWatchEvt = evt;
    gulp.start('jsonComment');
});
evt here is what Brian called file.
What I want to add is how to pass a parameter from your watcher to your task, for example the i in your initial code. One way of doing that is to set a global variable that holds the data to be passed, so it's accessible in both blocks: you set the value in the watcher just before starting the task, and then you use it in the task. A good practice is to copy it into a task-local variable right at the start of the task; that way you avoid it being changed by another watch trigger while the task is running.
For a lively example, check out my answer here: https://stackoverflow.com/a/49733123/7668448
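A minimal sketch of that pattern (the task name and paths are illustrative): a module-level variable carries the watch event from the watcher into the task, and the task copies it into a local right away so a second trigger can't change it mid-run.

var gulp = require('gulp');
var watch = require('gulp-watch');

var jsonCommentWatchEvt; // shared between the watcher and the task

watch('./app/tempGulp/json/**/*.json', function (evt) {
    jsonCommentWatchEvt = evt;  // set just before starting the task
    gulp.start('jsonComment');
});

gulp.task('jsonComment', function () {
    // copy to a task-local variable immediately
    var evt = jsonCommentWatchEvt;
    console.log('changed file:', evt.path);
    // ... process evt.path here ...
});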
Related
Ok, I'm nearing the finish line with my new PHP/JS app built with Gulp and Browserify. The last part is how to "boot" it, i.e. how to do the "first call".
Let's say I have 3 JS entry points
/js/articles.js
/js/categories.js
/js/comments.js
each of them using some JS modules.
Then I have 3 HTML files, requiring their JS
/articles.html
/categories.html
/comments.html
example /js/articles.js
var $ = require("jquery");
var common = require("../common.js");

var viewModel = {
    readData: function() {
        /* read record from API and render */
    },
    insert: function() {
        /* open a modal to insert new record */
    }
};
What I need to do now is perform this sort of "boot": call some init functions I need, then load server data, then bind all buttons and so on to the viewModel's methods:
$(document).ready(function() {
    common.init();
    viewModel.readData();
    $('#btn-add').click(viewModel.insert);
});
Ok, but where am I to put this?
A) In HTML file?
I can't, because I don't have any global JS variable to access.
B) Should I put it into articles.js?
At the moment, my Gulp task will bundle everything (articles.js, categories.js, comments.js, common libraries) into a single bundle.js.
If I put it into articles.js, it will end up in bundle.js, so the articles-related boot code would be executed on the "categories" page as well. And this is wrong.
C) Should I split articles.js into 2 files, one containing the viewModel definition and the other doing the $(document).ready stuff? But again, how do I access the correct viewModel?
Which is the correct solution?
Thank you
Seems your gulp task would just concat all the entries into bundle.js, so you could probably just add another entry called js/index.js and put your initialization code inside it.
It's confusing that your code will be executed (based on your description in B) even though you don't call require on it. Can you provide your bundle.js and one of your HTML files?
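For illustration, a sketch of what such a js/index.js entry could look like, assuming each page module exports its viewModel and each page marks itself with a data-page attribute on <body> (both of which are assumptions, not something stated in the question):

// js/index.js -- hypothetical single entry point
var $ = require("jquery");
var common = require("../common.js");

// each page module would need to end with: module.exports = viewModel;
var pages = {
    articles: require("./articles.js"),
    categories: require("./categories.js"),
    comments: require("./comments.js")
};

$(document).ready(function() {
    common.init();
    // e.g. <body data-page="articles"> in articles.html
    var viewModel = pages[$("body").data("page")];
    if (viewModel) {
        viewModel.readData();
        $("#btn-add").click(viewModel.insert);
    }
});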
I'm running a number of Grunt tasks on a project, one of which sets a number of options via grunt.option(key, value) which I need to access in a subsequent task via var option = grunt.option(key). These options return undefined when I try to access them in the latter task.
If I log the variable at the head of the latter task's config, it is printed before that task is run, and I am unable to access the previously set value in the task's config.
Is there something I need to do between setting the grunt.option and using it in another task to notify Grunt of the change? Am I doing something inherently wrong here? Or is there a better way to do this with a global variable of sorts? (My research pointed me to using grunt.option.)
My Gruntfile.js
grunt.log.writeln('loading tasks');
grunt.loadTasks('grunttasks');
grunt.log.writeln('tasks loaded');

grunt.registerTask(
    'jenkins', [
        'clean',                 // clears out any existing build folders
        'curl',                  // gets build config file from remote server
        'set-env',               // sets the grunt.options based on the build config file
        'string-replace:config', // attempts to access the new grunt.options
        ....
    ]
);
In my set-env task, I set some environment variables based on the contents of a text file returned by the curl task. This works fine, and I can log all the grunt.options immediately after setting them, so I know they are being set correctly.
set-env-task
module.exports = function(grunt) {
    grunt.registerTask('set-env', 'set-env', function() {
        ......
        for (var i = 0; i < propFile.length; i++) {
            if (propFile[i] !== '') {
                ......
                keyValues[propName] = propValue;
                grunt.option(propName, propValue);
                console.log("FROM GRUNT.OPTION " + grunt.option(propName));
                ......
            }
        }
        ......
    });
};
When I try to access the grunt.options set in the above task from my string-replace (or any other subsequent) task, undefined is returned. If I set test values for these grunt.options at the start of my Gruntfile.js, I can access them with no issue:
module.exports = function(grunt) {
    grunt.config('string-replace', {
        ..........
        config: {
            files: configFiles,
            options: {
                replacements: [
                    ..........
                    {
                        pattern: /var _OPTION_KEY = \"(.*?)\"\;/ig,
                        replacement: 'var _OPTION_KEY = "' + grunt.option('_OPTION_KEY') + '";' // grunt.option('_OPTION_KEY') here is undefined
                    }
                    ..........
                ]
            }
        }
        ..........
    });

    grunt.loadNpmTasks('grunt-string-replace');
}
(I have double, triple and quadruple checked that I'm using the correct option keys)
The problem is that you're accessing the variables from the grunt options during the "config stage" of the task, which runs once, before you set the options in your set-env task. Evaluating the custom option key at that point in the code indeed yields undefined. (Note that this is practically the equivalent of using the initConfig block.)
What you can do instead is, rather than reading the option values from the options object, modify the config object of the task directly using grunt.config.set, which enables what you've been trying to do.
So basically, instead of
grunt.option(propName, propValue);
use something like
grunt.config.set('mytask.replacements', options.replacements);
(Of course, this will require a sizable reworking of your code, I don't get into that.)
Edit: there's probably an even cleaner solution using the templating functionality of Grunt; see this Stack Overflow answer, and the Grunt API docs on templating:
Template strings can be processed manually using the provided template functions. In addition, the config.get method (used by many tasks) automatically expands <% %> style template strings specified as config data inside the Gruntfile.
The point being that these are evaluated not when the config block is parsed, but only when the task reads the values using config.get.
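A hedged sketch of that approach (the task names and the files mapping are illustrative, not taken from the question): the value is written into Grunt config by one task, and the <%= %> template is only expanded when the consuming task reads its config via config.get.

module.exports = function(grunt) {
    grunt.initConfig({
        'string-replace': {
            config: {
                files: { 'dist/config.js': 'src/config.js' }, // illustrative
                options: {
                    replacements: [{
                        pattern: /var _OPTION_KEY = \"(.*?)\"\;/ig,
                        // expanded lazily via config.get, i.e. only when
                        // string-replace actually runs -- after set-env
                        replacement: 'var _OPTION_KEY = "<%= _OPTION_KEY %>";'
                    }]
                }
            }
        }
    });

    grunt.registerTask('set-env', function() {
        // the value would really come from the curl'd build config file
        grunt.config.set('_OPTION_KEY', 'some-value');
    });

    grunt.loadNpmTasks('grunt-string-replace');
    grunt.registerTask('jenkins', ['set-env', 'string-replace:config']);
};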
Your pattern of using the options object to share values between tasks works better between two custom tasks of yours: you can set it in one task and read it in the other, not in the configuration, but as an actual step of running the tasks.
In general, although it seems doable, I'd say this is not the workflow Grunt has in mind. If you know which environment you're running Grunt in, it's easier to pass the environment parameters through the --option command-line flags directly when you run a Grunt task (e.g. grunt jenkins --_OPTION_KEY=some-value makes grunt.option('_OPTION_KEY') available even while the config is being parsed), which would already take effect in any task configuration you're doing.
changing exports.X in a function seems to not work...
I want to be able to load settings from a file and access them in Node.js. I have this working currently; however, the clients connecting to my Node application can edit what's in the settings file. Unfortunately, as it stands, the Node application has to be restarted for the changes to take effect. Is there a way I can reload the module.exports on the fly?
EDIT:
Settings file is literally a JSON string.
My settings module is "required" in almost every single file, and there are a lot of files... so reloading it on a per-file basis is out of the question. I do, however, know precisely when someone makes a change to the settings.
If you are using require to load the settings and only referencing the settings from one module, then doing something along the lines of:
delete require.cache[require.resolve(filename)];
will work for you.
If, on the other hand, multiple modules will be referencing these settings, that approach can become a bit unwieldy and open you up to unforeseen bugs. For example, if any of the modules are holding on to a reference to the required settings file, they would each need to somehow learn that the settings had changed and update their references.
To alleviate (not completely solve) the caching issue, you can build your settings interface so that users of it must access the settings object via a function, and/or require that properties are accessed via functions. Even with this model, someone may still decide to cache a setting, causing an obscure failure later down the road.
Using the simplest approach of a single getter for the settings object would look something like this:
var settings = require('./settings.json');
// ... watch for changes and reload by invalidating node's cache
module.exports = function() { return settings; }
Usage:
var settings = require('./path/to/settings');
settings().foo;
There are several libraries that do settings. Depending on your needs, I'm partial to nconf.
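For example, a minimal nconf setup can look like this (the file name is illustrative):

var nconf = require('nconf');

// command-line arguments and environment variables take precedence
// over values from the file
nconf.argv().env().file({ file: 'settings.json' });

var foo = nconf.get('foo');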
I'd set up a file watcher here that checks for changes to a JSON file dynamically. It is not recommended practice to change a JS script once the app is running.
Something like:
var _ = require("lodash");
var fs = require("fs");

var settingsFile = 'my-settings.json';
var result = {};

// load once at startup so the export isn't empty before the first change
_.extend(result, JSON.parse(fs.readFileSync(settingsFile)));

fs.watch(settingsFile, function(event) {
    // the filename argument passed to the watch callback is unreliable
    // across platforms, so re-read the known path instead
    fs.readFile(settingsFile, function(err, data) {
        if (err) {
            // your error catching
        }
        _.extend(result, JSON.parse(data));
    });
});

module.exports = result;
Now, this comes with lots of caveats, the first being that fs.watch is not supported consistently across all platforms.
http://nodejs.org/api/fs.html#fs_fs_watch_filename_options_listener
Second, it's really awkward to mutate a property like this. The general expectation is that a module's exports don't mutate. I'd instead recommend exposing a method whose result can change based on the state of the file: a getter for the resulting data.
Third, a file watcher can be expensive, memory-wise.
This is better code, IMHO:
var fs = require("fs");

var filename = 'my-settings.json';
var lastModified;
var mySetting;

module.exports = {
    getSettingAsync: function(callback) {
        fs.stat(filename, function(err, stat) {
            if (err) {
                // your error catching
            }
            // Date objects can't be compared with ==, so compare timestamps;
            // lastModified is undefined on the first call, forcing a read
            if (lastModified && stat.mtime.getTime() === lastModified) {
                callback(mySetting);
            } else {
                fs.readFile(filename, function(err, data) {
                    if (err) {
                        // your error catching
                    }
                    // remember when the file was last read
                    lastModified = stat.mtime.getTime();
                    // this assumes that your data is always valid JSON
                    mySetting = JSON.parse(data).mySetting;
                    callback(mySetting);
                });
            }
        });
    }
};
In this case, we stat the JSON file and expose the setting as an async method. You could just as easily change the code to use the sync versions if need be and return the value instead of invoking the callback. This version checks when the file was last changed, which is cheaper than reading the whole file every time; it reads the file only if it is newer, and saves you the need to use a potentially buggy file watcher.
By the way, I've not tested this code and it may contain errors as is, but the concept is sound.
But, perhaps the more salient question, why not just store that value in the database?
I have a Meteor project that is starting to get out of hand at about 800 lines of code. I set out today to modularize and clean things up a bit, and things are looking good, but I'm getting some errors that I don't know the best way to deal with.
Here's a prime example.
I am using d3 to create a force layout (this question isn't specific to d3). I instantiate some variables, most notably
var force = d3.layout.force()
in a file
/client/views/force/forceLayout.js
I made a bunch of controls for this force layout and put them in their own .html and .js files. Heres an example
initChargeSlider = function() {
    d3.select("#charge-slider")
        .call(d3.slider()
            .min(-301)
            .max(-1)
            .step(5)
            .value(Session.get("charge"))
            .on("slide", function(evt, value) {
                Session.set("charge", value);
                // force.stop();
                force = force.charge(value);
                force.start();
            }));
};

Template.charge.rendered = function() {
    initChargeSlider();
};
in file
/client/views/force/controls/sliders/charge.js
Due to Meteor loading deeper directories first, I get an error at force = force.charge(value) because forceLayout.js hasn't instantiated force yet.
I'm curious what the best way of dealing with this is. Moving the files around to fix the loading order would just reverse all the modularizing I just did. I think a singleton, an object, or a monad may be in order, but I'm not sure which or why. I would appreciate an explanation of how to go about fixing these errors.
Thanks
Chet
Meteor before 0.6.5 ran files without wrapping them inside a function wrapper (function() { /* your code */ })().
This behavior is still followed if you place your files in the client/compatibility folder:
Some JavaScript libraries only work when placed in the
client/compatibility subdirectory. Files in this directory are
executed without being wrapped in a new variable scope. This means
that each top-level var defines a global variable. In addition, these
files are executed before other client-side JavaScript files.
Now Meteor is more unforgiving of global variables, and one needs to be explicit about declaring them. Hence,
window.force = d3.layout.force()
or even
this.force = d3.layout.force(); // this === window in global context.
would solve the problem.
Ok, so I have a .js file with about 10k lines of code. This code can be split up into:
sub-object definitions
container object definitions
initialization code (after the objects have been defined)
program functionality
I would like to split this one file into 4 separate files because it gives a better overview. How do I go about doing this, given that they absolutely have to be declared in that order? What should I wrap in a $(document).ready() and what not?
If I just separate the files and link them in the HTML in the correct order, I get undefined object errors. I was also thinking of something like the following, but I don't know if that's any good...
Second JS File
function initializeContainers() {
    var containerObj1 = {
            bla: 'bla',
            bla2: 'bla2'
        },
        containerObj2 = {
            bla: 'bla',
            bla2: 'bla2'
        };
}
First JS File
$(document).ready(function() {
    function initializeSubObjects(callback) {
        var subObj1 = {
            somekey: 'somevalue',
            somekey2: 'someothervalue'
        };
        callback();
    }

    initializeSubObjects(initializeContainers);
});
I have no clue whether this is the correct way to do it.
PS: I also know you can add the script tags dynamically; but is that good practice?
In your example, you should swap the contents of your first and second files. You should only call the initializeContainers method when you know for sure the file that defines it has been loaded.
The easiest way to think about this is to load all the files with definitions first (helpers, functions, classes, ...). Once all of these are loaded, put the rest in the last file and execute code only from that last file, as the sketch below shows.
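A minimal sketch of that layout (file names are illustrative): the definition files only declare things, and the last file is the only one that executes code.

// definitions.js -- loaded first; declares, never executes
function initializeSubObjects() {
    window.subObj1 = {
        somekey: 'somevalue',
        somekey2: 'someothervalue'
    };
}

function initializeContainers() {
    window.containerObj1 = { bla: 'bla', bla2: 'bla2' };
    window.containerObj2 = { bla: 'bla', bla2: 'bla2' };
}

// main.js -- loaded last; the only file that runs code
$(document).ready(function() {
    initializeSubObjects();
    initializeContainers();
    // ... initialization code, then program functionality ...
});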
On a side note: if you deploy this into a production environment, you should consider bundling these files. Downloading 4 files will impact your load time, so it's better to bundle them together and send them over as a single file. While you're at it, you probably also want to minify it.