How can I minimize duplication of code that sometimes needs a callback? - javascript

In my gulpfile, I have a task that processes all of my pages and another task that watches my pages for changes and processes just the changed page. It looks like this:
const buildPages = path => cb => {
  gulp.src(path)
    // some lines of piping
    .pipe(gulp.dest(pages.dist));
  cb();
}
const watchPages = () =>
  gulp.watch(pages.src).on('change', path =>
    gulp.src(path)
      // the same piping as above
      .pipe(gulp.dest(pages.dist))
  );
The .on() method of the chokidar watcher object returned by gulp.watch() does not hand its handler a callback function, while the gulp task above requires one. So, to remove the code duplication, I can do this:
const buildPages = path =>
  gulp.src(path)
    // some lines of piping
    .pipe(gulp.dest(pages.dist));
const buildPagesWithCallback = path => cb => {
  buildPages(path)
  cb();
}
const watchPages = () =>
  gulp.watch(pages.src).on('change', path =>
    buildPages(path)
  );
Is this the right way to go about it, or is there a way to remove duplication without creating an extra function (perhaps by making the watcher receive a callback)?

I'm not sure what your other requirements/needs are, but given your description, I would usually have a setup like this (assuming you're using gulp 4):
const src = [ './src/pageA/**/*.js', ...others ];
const dist = './dist';
const buildPages = () => gulp.src(src).pipe(...).pipe(gulp.dest(dist));
const callback = () => { /* do stuff */ };
exports.buildPageWithCallback = gulp.series(buildPages, callback);
exports.watchPages = () => gulp.watch(src, gulp.series(buildPages));
exports.watchPageWithCallback = () => gulp.watch(src, gulp.series(buildPages, callback));
I would use gulp.series to run the callback without explicitly passing a callback into a task-maker function, and I would pass the task directly to gulp.watch.
If your requirements demand that buildPages take in a path, then I think the way you're doing it is right. Sorry if I've misunderstood the question.
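One more option worth noting, as a minimal sketch assuming gulp 4: a task that returns its stream lets gulp detect completion on its own, so the callback wrapper can be dropped entirely:
// A sketch assuming gulp 4: returning the stream signals task completion,
// so no cb parameter is needed anywhere.
const buildPages = path => () =>
  gulp.src(path)
    // some lines of piping
    .pipe(gulp.dest(pages.dist));
const watchPages = () =>
  gulp.watch(pages.src).on('change', path => buildPages(path)());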

Related

Cannot send the data from main process to render in Electron

I am a beginner to Electron, and I am developing an educational game. When the user clicks the start button, I want the main process to fetch information from the storage folder (I am OK with the data being open to users). The main process accomplishes that without a problem.
main.js
ipcMain.on('load-sets', (event) => {
  const directoriesInDIrectory = fs.readdirSync(folderName, { withFileTypes: true })
    .filter((item) => item.isDirectory())
    .map((item) => item.name);
  event.sender.send("loaded-sets", directoriesInDIrectory);
})
Then, I use ipcRenderer in preload.js to receive the message.
preload.js
ipcRenderer.on("loaded-sets", (event, package) => {
  window.loaded_sets = [];
  for (var i = 0; i < package.length; i++) {
    window.loaded_sets[i] = package[i];
  }
})
Finally, I expose the sets via contextBridge:
contextBridge.exposeInMainWorld("api", {
  LoadFiles,
  quit,
  sets: () => window.loaded_sets,
  sentToRender,
})
However, when I run the following code in render.js:
console.log(window.api.sets())
it outputs undefined.
I've already tried using postMessage and eventListeners associated with it. Also, I've tried to get the variable via another function:
function sentToRender() {
  return window.loaded_sets;
}
The function was also exposed and could be called in the renderer process. Yet, the output was still undefined.
For those wondering why I don't send the data straight to the renderer: the renderer throws an error when I try to require ipcRenderer, and I've heard that it is good practice to route data through preload. Is there a solution?
In main.js
ipcMain.on('load-sets', (event) => {
  const directoriesInDIrectory = fs.readdirSync(folderName, { withFileTypes: true })
    .filter((item) => item.isDirectory())
    .map((item) => item.name);
  event.sender.send("loaded-sets", directoriesInDIrectory);
})
In preload.js
contextBridge.exposeInMainWorld("api", (callback) => ipcRenderer.on("loaded-sets", callback))
In renderer.js
window.api((event, package) => {
  window.loaded_sets = [];
  for (var i = 0; i < package.length; i++) {
    window.loaded_sets[i] = package[i];
  }
  console.log(window.loaded_sets)
})
Add event emitting from the renderer process, if it wasn't done already:
ipcRenderer.send("load-sets");
This makes the listener on the main-process side fire.
But if you don't want to send the event from the renderer process, then remove the ipcMain listener and just send the event by doing:
yourRenderedWindow.webContents.send("loaded-sets", directoriesInDIrectory);
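For completeness, here is a minimal sketch of the full round trip under the first approach. The helper names (loadSets, onLoadedSets) are illustrative, not from the original code:
// preload.js — expose narrow helpers instead of the raw ipcRenderer
contextBridge.exposeInMainWorld("api", {
  loadSets: () => ipcRenderer.send("load-sets"),
  onLoadedSets: (callback) =>
    ipcRenderer.on("loaded-sets", (event, sets) => callback(sets)),
});
// renderer.js — subscribe first, then trigger the main-process handler
window.api.onLoadedSets((sets) => console.log(sets));
window.api.loadSets();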

Node can't find certain modules after synchronous install

I've got a script that synchronously installs non-built-in modules at startup. It looks like this:
const cp = require('child_process')
function requireOrInstall (module) {
  try {
    require.resolve(module)
  } catch (e) {
    console.log(`Could not resolve "${module}"\nInstalling`)
    cp.execSync(`npm install ${module}`)
    console.log(`"${module}" has been installed`)
  }
  console.log(`Requiring "${module}"`)
  try {
    return require(module)
  } catch (e) {
    console.log(require.cache)
    console.log(e)
  }
}
const http = require('http')
const path = require('path')
const fs = require('fs')
const ffp = requireOrInstall('find-free-port')
const express = requireOrInstall('express')
const socket = requireOrInstall('socket.io')
// List goes on...
When I uninstall modules, they get installed successfully when I start the server again, which is what I want. However, the script starts throwing Cannot find module errors when I uninstall the first one or two modules in the list that use the requireOrInstall function. That's right: the errors only occur when the script has to install the first or the first two modules, not when only the second module needs installing.
In this example, the error will be thrown when I uninstall find-free-port, unless I move its require at least one spot down ¯\_(• _ •)_/¯
I've also tried adding a delay directly after the synchronous install to give it a little more breathing time with the following two lines:
var until = new Date().getTime() + 1000
while (new Date().getTime() < until) {}
The pause was there. It didn't fix anything.
@velocityzen came up with the idea to check the cache, which I've now added to the script. It doesn't show anything out of the ordinary.
@vaughan's comment on another question noted that this exact error occurs when requiring a module twice. I've changed the script to use require.resolve(), but the error still remains.
Does anybody know what could be causing this?
Edit
Since the question has been answered, I'm posting the one-liner (139 characters!). It doesn't globally define child_process, has no final try-catch, and doesn't log anything to the console:
const req=async m=>{let r=require;try{r.resolve(m)}catch(e){r('child_process').execSync('npm i '+m);await setImmediate(()=>{})}return r(m)}
The function is named req() and can be used as in @alex-rokabilis' answer.
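For readability, here is the same one-liner expanded; it is functionally identical:
const req = async m => {
  let r = require;
  try {
    r.resolve(m);
  } catch (e) {
    r('child_process').execSync('npm i ' + m);
    await setImmediate(() => {}); // yield before requiring the fresh install
  }
  return r(m);
};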
It seems that the require operation right after an npm install needs a certain delay.
The problem is also worse on Windows: there it will always fail if the module needs to be npm-installed.
It's as if, at a specific point in time, a snapshot is taken of which modules can be required and which cannot. That is probably why require.cache was mentioned in the comments. Nevertheless, I suggest you check the two following solutions.
1) Use a delay
const cp = require("child_process");
const requireOrInstall = async module => {
try {
require.resolve(module);
} catch (e) {
console.log(`Could not resolve "${module}"\nInstalling`);
cp.execSync(`npm install ${module}`);
// Use one of the two awaits below
// The first one waits 1000 milliseconds
// The other waits until the next event cycle
// Both work
await new Promise(resolve => setTimeout(() => resolve(), 1000));
await new Promise(resolve => setImmediate(() => resolve()));
console.log(`"${module}" has been installed`);
}
console.log(`Requiring "${module}"`);
try {
return require(module);
} catch (e) {
console.log(require.cache);
console.log(e);
}
}
const main = async() => {
const http = require("http");
const path = require("path");
const fs = require("fs");
const ffp = await requireOrInstall("find-free-port");
const express = await requireOrInstall("express");
const socket = await requireOrInstall("socket.io");
}
main();
await always needs a promise to work with, but you don't need to create one explicitly: await wraps whatever it is waiting for in a promise if it isn't handed one.
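A tiny illustration of that point (inside any async function):
const demo = async () => {
  // Explicit promise: resolves when the timer fires, 1000 ms later.
  await new Promise(resolve => setTimeout(resolve, 1000));
  // Non-promise value: await wraps it in an already-resolved promise,
  // so execution continues on the next microtask without really waiting.
  await setImmediate(() => {});
};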
2) Use a cluster
const cp = require("child_process");
function requireOrInstall(module) {
try {
require.resolve(module);
} catch (e) {
console.log(`Could not resolve "${module}"\nInstalling`);
cp.execSync(`npm install ${module}`);
console.log(`"${module}" has been installed`);
}
console.log(`Requiring "${module}"`);
try {
return require(module);
} catch (e) {
console.log(require.cache);
console.log(e);
process.exit(1007);
}
}
const cluster = require("cluster");
if (cluster.isMaster) {
cluster.fork();
cluster.on("exit", (worker, code, signal) => {
if (code === 1007) {
cluster.fork();
}
});
} else if (cluster.isWorker) {
// The real work here for the worker
const http = require("http");
const path = require("path");
const fs = require("fs");
const ffp = requireOrInstall("find-free-port");
const express = requireOrInstall("express");
const socket = requireOrInstall("socket.io");
process.exit(0);
}
The idea here is to re-run the process in case of a missing module. This way we fully reproduce a manual npm install, so, as you'd guess, it works! It also behaves more synchronously than the first option, but is a bit more complex.
I think your best option is either:
(ugly) to install packages globally, instead of locally
(best solution?) to define YOUR own 'package repository installation', used both when installing AND when requiring
First, you may consider using the npm-programmatic package.
Then, you may define your repository path with something like:
const PATH='/tmp/myNodeModuleRepository';
Then, replace your installation instruction with something like:
const npm = require('npm-programmatic');
npm.install(`${module}`, {
  cwd: PATH,
  save: true
});
Eventually, replace your fallback require instruction with something like:
return require(require.resolve(module, { paths: [ PATH ] }));
If it is still not working, you may update the require.cache variable; for instance, to invalidate a module, you can do something like:
delete require.cache[process.cwd() + '/node_modules/bluebird/js/release/bluebird.js'];
You may need to update it manually, to add information about your new module, before loading it.
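Assembled, that could look something like the sketch below (untested; npm-programmatic's install() returns a promise, so the helper becomes async):
const npm = require('npm-programmatic');
const PATH = '/tmp/myNodeModuleRepository';
const requireOrInstall = async module => {
  // Resolve against the custom repository rather than the local node_modules.
  const resolveFromRepo = () => require.resolve(module, { paths: [PATH] });
  try {
    return require(resolveFromRepo());
  } catch (e) {
    await npm.install(module, { cwd: PATH, save: true });
    return require(resolveFromRepo());
  }
};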
cp.exec is an asynchronous call, so try checking whether the module is installed in its callback function. I have tried it; installation is clean now:
const cp = require('child_process')
function requireOrInstall (module) {
  try {
    require.resolve(module)
  } catch (e) {
    console.log(`Could not resolve "${module}"\nInstalling`)
    cp.exec(`npm install ${module}`, () => {
      console.log(`"${module}" has been installed`)
      try {
        return require(module)
      } catch (e) {
        console.log(require.cache)
        console.log(e)
      }
    })
  }
  console.log(`Requiring "${module}"`)
}
const http = require('http')
const path = require('path')
const fs = require('fs')
const ffp = requireOrInstall('find-free-port')
const express = requireOrInstall('express')
const socket = requireOrInstall('socket.io')
[Screenshots omitted: the console output when node_modules is not yet available, and when it is already available.]

How to pass an argument from a gulp watcher to a task

If I have a watcher like this:
gulp.watch('js/**/*.js').on('change', path => {
  gulp.series(build, reload)();
});
...and task build would look like this:
const build = done => {
  return gulp
    .src(path) // Use "path" here
    .pipe(
      rename({
        dirname: ''
      })
    )
    .pipe(uglify())
    .pipe(gulp.dest('build'));
};
How can I pass path argument to the build task?
As an exercise I believe I got this working as you wanted. But first let me say that the traditional way of limiting the source pipeline would be with something like gulp-newer. You should see if that accomplishes what you want.
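For reference, that approach looks roughly like this (a sketch; the 'build' output directory is assumed):
const newer = require('gulp-newer');
// Only files newer than their counterparts in 'build' pass through,
// so a full glob in src still processes just the changed files.
const build = () =>
  gulp
    .src('js/**/*.js')
    .pipe(newer('build'))
    .pipe(uglify())
    .pipe(gulp.dest('build'));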
But here is something that may work for you [not well tested!]:
function build (path) {
  return new Promise(resolve => {
    // using setTimeout() to prove async/await is working as expected
    setTimeout(() => {
      resolve('resolved');
    }, 2000);
    // put your gulp.src pipeline here using 'path'
    console.log("2 path = " + path);
  });
};
function anotherTask (path) {
  return new Promise(resolve => {
    // put your gulp.src pipeline here
    console.log("4 path = " + path);
    resolve('resolved');
  });
};
function testWatch () {
  console.log("in testWatch");
  // debounceDelay because gulp likes to call the watcher 2 or 3 times otherwise;
  // see "gulp watch task running multiple times when a file is saved"
  var watcher = gulp.watch('js/**/*.js', { debounceDelay: 2000 });
  // I added the async/await because I wasn't sure those functions would run in
  // series as you wanted. With the event listener route I couldn't get
  // gulp.series to work, so I went with async/await.
  watcher.on('change', async function(path, stats) {
    console.log('1 File ' + path + ' was changed');
    await build(path);
    console.log("3 after build");
    // I would assume that the **last** task in the chain doesn't need 'await'
    // or to return a promise as in anotherTask
    await anotherTask(path);
    console.log("5 after anotherTask");
  });
};
gulp.task('default', gulp.series(testWatch));
The "gulp watch task running multiple times when a file is saved" question is the one mentioned above in the code.
Output (my js watch src is different from yours):
in testWatch
1 File src\js\main.js was changed
2 path = src\js\main.js
3 after build
4 path = src\js\main.js
5 after anotherTask
1 File src\js\taxonomy.js was changed
2 path = src\js\taxonomy.js
3 after build
4 path = src\js\taxonomy.js
5 after anotherTask

Pointfree-Style with template-string in ramda.js

I'm having problems writing a pointfree-style function in ramda.js and wondered if anybody can help me with that. The getEnv function reads an env variable and logs to the console if it couldn't be found.
Here is my code:
const env = name => R.path(['env', name], process);
const getEnv = name => R.pipe(
  env,
  R.when(R.isNil, () => log(`Missing env "${name}"`))
)(name);
console.log(getEnv('myenv'))
I'd like to drop the name parameter of the getEnv function (and, if possible, also of the env function) but don't know how to do this.
The function getEnv does more than it should. It actually returns the content of the path or logs a validation message.
Split it into two separate functions. In my example below, I call them findPathInProcess and validatePath; validatePath works generically for all paths. I've wrapped validatePath in another function called validateEnvPath, which searches directly under "env".
To get rid of env you can do the following: R.flip(R.curry(R.path)). This curries the function and then flips its arguments around, so you can tell it first which object you want to query.
const process = {env: {myenv: ':)'}}
const path = R.flip(R.curry(R.path))
const findPathInProcess = R.pipe(
  path(process),
  R.ifElse(
    R.isNil,
    R.always(undefined),
    R.identity
  )
)
const validatePath = path =>
  validationPathResponse(findPathInProcess(path))(`can't find something under [${path}]`)
const validateEnvPath = path =>
  validatePath(buildPath(['env'])(path))
const buildPath = xs => x =>
  xs.concat(x)
const validationPathResponse = response => errorMessage =>
  response
    ? response
    : errorMessage
console.log(validatePath(['env', 'myenv']))
console.log(validateEnvPath('myenv'))
console.log(validateEnvPath('yourenv'))
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
Consider using the Either monad. Sanctuary has it already implemented and plays nice with its brother Ramda:
const viewEnv = S.pipe([
  S.flip(R.append)(['env']),
  R.lensPath,
  R.view,
  S.T(process),
  S.toEither('Given variable could not be retrieved')
])
const log = R.tap(console.log)
const eitherSomeVar = viewEnv('someVar')
// doSomeOtherStuff is assumed to be defined elsewhere
const eitherWhatever = S.bimap(log)(doSomeOtherStuff)(eitherSomeVar)
In addition, one could also write the following:
const path = R.flip(R.path) // already curried
const findPathInProcess = R.pipe(
  path(process),
  R.when(
    R.isNil,
    R.always(undefined)
  )
)
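Tying this back to the original question, a pointfree getEnv could look like the sketch below (against the Ramda 0.25 build loaded above). Note that the pointfree version can no longer interpolate the variable name into the log message without extra plumbing, and log is assumed to be defined as in the question:
// name => R.path(['env', name], process), written pointfree
const env = R.pipe(R.of, R.prepend('env'), R.flip(R.path)(process))
const getEnv = R.pipe(env, R.when(R.isNil, () => log('Missing env variable')))
console.log(getEnv('myenv'))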

Parsing multiple large JSON files with node to mongoDB

I am parsing multiple large JSON files into my mongoDB database. At the moment I am using the stream-json npm package. After I load one file, I change the filename that I am loading and relaunch the script to load the next file. This is unnecessarily time-consuming. So how can I iterate through all the files automatically? At the moment my code looks like this:
const StreamArray = require('stream-json/utils/StreamArray');
const path = require('path');
const fs = require('fs');
const filename = path.join(__dirname, './data/xa0.json'); // The next file is named xa1.json and so on.
const stream = StreamArray.make();
stream.output.on('data', function (object) {
  // my function block
});
stream.output.on('end', function () {
  console.log('File Complete');
});
fs.createReadStream(filename).pipe(stream.input);
I tried iterating through the file names by adding a loop that increments the filename (i.e. xa0 to xa1) at the point where the script logs 'File Complete', but this didn't work. Any ideas how I might achieve this, or something similar?
Just scan your JSON files directory using fs.readdir. It will return a list of file names that you can then iterate, something like this:
fs.readdir("./jsonfiles", async (err, files) => {
  for (const file of files) {
    await saveToMongo("./jsonfiles/" + file)
  }
})
So you just launch your script once and wait until full completion.
Of course, in order for it to be awaited, you need to promisify the saveToMongo function, something like:
const saveToMongo = fileName => {
  return new Promise((resolve, reject) => {
    // ... logic here
    stream.output.on('end', function () {
      console.log('File Complete');
      resolve() // Will trigger the next await
    });
  })
}
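Putting the pieces together, a sketch of the whole script, using the same stream-json/utils/StreamArray API as the question (the data directory and the MongoDB insertion logic are assumptions):
const StreamArray = require('stream-json/utils/StreamArray');
const path = require('path');
const fs = require('fs');
const saveToMongo = fileName =>
  new Promise((resolve, reject) => {
    const stream = StreamArray.make();
    stream.output.on('data', object => {
      // insert the object into mongoDB here
    });
    stream.output.on('end', () => {
      console.log('File Complete:', fileName);
      resolve(); // Will trigger the next await
    });
    fs.createReadStream(fileName).on('error', reject).pipe(stream.input);
  });
fs.readdir(path.join(__dirname, 'data'), async (err, files) => {
  if (err) throw err;
  for (const file of files) {
    await saveToMongo(path.join(__dirname, 'data', file));
  }
  console.log('All files complete');
});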
