I wrote a Node.js script which finds the names of the most recently changed/modified files.
For that, I am using the find CLI command. I keep one hidden file, .change, and compare the other files' modified times against it.
Here is the code:
const es6dir = 'es6';
const path2dir = './htdocs/';
const exec = require("child_process").exec;
exec(`find ${path2dir + es6dir}/ -type f -newer .change`, (error, stdout) => {
  if (error) {
    console.log(`Error: ${error}`);
    return;
  }
  console.log(stdout);
  // Update the .change modified timestamp
  exec('touch -c .change');
});
Everything works fine if I run this command in Git Bash, but if I use the Windows terminal it says the command is incorrect.
Is there a simple way that will work in both the Linux and Windows terminals?
I would like to run this command on both platforms because some of the team members are working on Linux while others are using Windows machines.
Consider using Node's built-in fs.Stats instead of platform-specific commands or utilities. The fs module's fs.stat method returns a stats object whose mtimeMs property holds the last modified time in milliseconds.
Cross-platform compatibility can be achieved through child processes or by using fs.stat and fs.writeFile.
fs.stat returns a Stats object like this:
Stats {
  dev: 16777220,
  mode: 33188,
  nlink: 1,
  uid: 501,
  gid: 20,
  rdev: 0,
  blksize: 4096,
  ino: 5077219,
  size: 11,
  blocks: 8,
  atimeMs: 1556271390822.264,
  mtimeMs: 1556271389892.5886,
  ctimeMs: 1556271389892.5886,
  birthtimeMs: 1556270439285.706,
  atime: 2019-04-26T09:36:30.822Z,
  mtime: 2019-04-26T09:36:29.893Z,
  ctime: 2019-04-26T09:36:29.893Z,
  birthtime: 2019-04-26T09:20:39.286Z }
As suggested in the comments and an answer, I agree this would be a better approach. Here is how you can approach writing a file and checking its modification date.
const fs = require('fs');

// Directory
const PATH = './';

// Get the file's stats
fs.stat(`${PATH}.change`, function (error, stats) {
  if (error) { throw error; } // Throw if there is an error, e.g. file not found

  let time = Date.now(); // Current time
  console.log('Current .change: Modified:', stats.mtime); // Last modified time

  // If the current time is later than the file's modified time
  if (time > stats.mtimeMs) {
    // writeFile with filename, content and callback; rewriting the file updates its mtime
    fs.writeFile(`${PATH}.change`, 'Inside File', function (error) {
      if (error) { throw error; }
      console.log('File is updated successfully.');
    });
  }
});
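Building on that, here is a minimal sketch of a cross-platform replacement for the question's find invocation, assuming the same ./htdocs/es6 directory and .change marker; note that unlike find, this version only scans the top level of the directory rather than recursing:

const fs = require('fs');
const path = require('path');

const dir = './htdocs/es6';
const marker = './.change'; // assumes the marker file already exists

const markerStats = fs.statSync(marker);

// Report files modified more recently than the marker, like `find -type f -newer .change`
for (const name of fs.readdirSync(dir)) {
  const fullPath = path.join(dir, name);
  const stats = fs.statSync(fullPath);
  if (stats.isFile() && stats.mtimeMs > markerStats.mtimeMs) {
    console.log(fullPath);
  }
}

// Update the marker's timestamps without rewriting it (a cross-platform `touch -c`)
const now = new Date();
fs.utimesSync(marker, now, now);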
I'm getting a weird error, and I can't seem to google the right things, as I'm finding no help online. I am writing a script that converts Swagger files to TypeScript. The error message is the one in the title, and sadly that's all the information I have. I will post the code below, along with the part where (I believe) the message is coming from:
import axios from 'axios';
import https from 'node:https';
import { execSync } from 'child_process';

async function getJson() {
  const agent = new https.Agent({
    rejectUnauthorized: false
  });
  return axios.get('https://common-customer-bpms.dev.havida.net/v3/api-docs', { httpsAgent: agent })
    .then(response => generateSwagger(response));
}

getJson();

async function generateSwagger(response) {
  try {
    execSync(`java -jar ..\\swagger-codegen-cli.jar generate -l typescript-angular -o .\\projects\\common\\src -i ${response}`);
  } catch (error) {
    console.log(error);
    console.log('You must have Java installed! You may have to change the JAVA_HOME location & path (Ex: set JAVA_HOME=`C:\\Programme\\Java\\jre1.8.0_321`), (set PATH=${JAVA_HOME}/bin:$PATH)');
  }
}
I think the error is coming from the try block, from the very last argument (-i ${response}). Am I able to use the function's parameter this way, or can I only use strings in CLI commands? I'm at a loss.
After multiple days of trying, here is the full code that works for anyone who may need it:
import axios from 'axios';
import https from 'node:https';
import { execSync } from 'child_process';
import fs from 'fs/promises';

async function getJson() {
  const agent = new https.Agent({
    rejectUnauthorized: false
  });
  return axios.get('https://common-customer-bpms.dev.havida.net/v3/api-docs', { httpsAgent: agent })
    .then(response => fs.writeFile("temp.json", JSON.stringify(response.data)))
    .then(() => generateSwagger());
}

getJson();

async function generateSwagger() {
  try {
    execSync(`java -jar ..\\swagger-codegen-cli.jar generate -l typescript-angular -i temp.json -o .\\projects\\common\\src\\lib`);
  } catch (error) {
    console.log(error);
    console.log('You must have Java installed! You may have to change the JAVA_HOME location & path (Ex: set JAVA_HOME=`C:\\Programme\\Java\\jre1.8.0_321`), (set PATH=${JAVA_HOME}/bin:$PATH)');
  } finally {
    // fs/promises methods return promises (no callbacks); await so failures surface
    await fs.unlink("temp.json");
  }
}
This pulls the JSON from the desired URL and writes it to a file. That file is then used as the input for -i, and once the conversion is complete, the JSON file is deleted in the finally block.
I'm trying to open explorer.exe from a Node.js script running inside WSL Ubuntu 20.04. The issue I've encountered is that explorer.exe never opens the folder I'd like it to. Instead of the WSL user's home directory, it opens my Windows user's Documents folder. What should I do to make explorer.exe open the folder I want?
Here's what I've tried:
The script first defines a function execShellCommand that promisifies exec. Then a self-executing function converts process.env.HOME to a Windows path with wslpath and executes explorer.exe with the converted path as a parameter.
#!/usr/bin/node
const execShellCommand = async cmd => {
  const exec = require('child_process').exec
  return new Promise((resolve, reject) => {
    exec(cmd, (error, stdout, stderr) => {
      if (error) {
        console.warn(error)
      }
      resolve(stderr ? stderr : stdout)
    })
  })
}

;(async () => {
  const path = await execShellCommand(`wslpath -w "${process.env.HOME}"`)
  console.log({ path })
  await execShellCommand(`explorer.exe ${path}`)
})()
The output I get when I run my script in WSL
$ ./script.js
{ path: '\\\\wsl$\\Ubuntu-20.04\\home\\user\n' }
Error: Command failed: explorer.exe \\wsl$\Ubuntu-20.04\home\user
    at ChildProcess.exithandler (child_process.js:308:12)
    at ChildProcess.emit (events.js:315:20)
    at maybeClose (internal/child_process.js:1048:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:288:5) {
  killed: false,
  code: 1,
  signal: null,
  cmd: 'explorer.exe \\\\wsl$\\Ubuntu-20.04\\home\\user\n'
}
explorer.exe does run regardless of the error shown in the output. The weird part is that if I run the same command my script tries to run (explorer.exe \\\\wsl$\\Ubuntu-20.04\\home\\user\n) directly in a WSL terminal, explorer.exe does open the folder I want it to. Trimming the newline at the end of the path doesn't help.
I think you have to do some additional escaping on the backslashes that wslpath produces. The code below works for me, meaning it opens the correct directory in Windows Explorer.
Note: it does still throw the error you mentioned, which I think is due to the way Node exits rather than anything wrong with the execution of explorer.exe; I'm not a Node expert by any stretch.
#!/usr/bin/node
const execShellCommand = async cmd => {
  const exec = require('child_process').exec
  return new Promise((resolve, reject) => {
    exec(cmd, (error, stdout, stderr) => {
      if (error) {
        console.warn(error)
      }
      resolve(stderr ? stderr : stdout)
    })
  })
}

;(async () => {
  let path = await execShellCommand(`wslpath -w "${process.env.HOME}"`)
  console.log("before", { path });
  path = path.replace(/\\/g, "\\\\");
  console.log("after", { path });
  await execShellCommand(`explorer.exe ${path}`)
})()
Even cleaner than replacing backslashes, I think this will work for you by resolving the $HOME variable directly into your command line:
await execShellCommand(`explorer.exe "$(wslpath -w $HOME)"`);
I am trying to get my Sequelize migration scripts to run automatically when my Node application starts. I have manually tested the migration scripts to make sure they are running correctly, by running the db:migrate command.
Now, I have added this file to run the migration scripts:
index.js
const { exec } = require('child_process');
const Sequelize = require('sequelize');
const config = require('config');

const sequelize = new Sequelize(config.get('postgres'));

async function start() {
  await new Promise((resolve, reject) => {
    const migrate = exec(
      'npm run db:migrate',
      // exec's env option expects an object of environment variables,
      // not a string; assuming NODE_ENV was the intent here
      { env: { ...process.env, NODE_ENV: 'development' } },
      (err, stdout, stderr) => {
        if (err) {
          reject(err);
        } else {
          resolve();
        }
      }
    );
    // Forward stdout+stderr to this process
    migrate.stdout.pipe(process.stdout);
    migrate.stderr.pipe(process.stderr);
  });
}

module.exports = {
  start: start
};
And in server.js:
async function start(appStarted) {
logger.info('Initializing ...');
// execute pending migrations
logger.info('Migrating DB...');
await require('../migrations').start();
logger.info('DB Migration complete.');
When I start the app, it displays Migrating DB... and gets stuck there.
How can I resolve this?
You can listen for the console message and kill the child process, like this:
// Listen for the console.log message and kill the process to proceed to the next step in the npm script
migrate.stdout.on('data', (data) => {
console.log(data);
if (data.indexOf('No migrations were executed, database schema was already up to date.') !== -1) {
migrate.kill();
}
});
This will make sure that the child process is killed when you've already run your migrations.
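Putting that together with the question's start() function, a minimal sketch might look like this; the message string is the one Sequelize prints, as quoted above, so verify it matches your migration output exactly:

const { exec } = require('child_process');

async function start() {
  await new Promise((resolve, reject) => {
    const migrate = exec('npm run db:migrate', (err) => {
      // Rejecting after resolve() is a no-op, so killing the child is safe
      if (err) reject(err);
      else resolve();
    });

    migrate.stdout.on('data', (data) => {
      const text = data.toString();
      console.log(text);
      // Kill the child once Sequelize reports there is nothing left to do
      if (text.indexOf('No migrations were executed, database schema was already up to date.') !== -1) {
        migrate.kill();
        resolve();
      }
    });

    migrate.stderr.pipe(process.stderr);
  });
}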
I'm trying to deploy from GitHub, and I want to execute more than one command, in order of the array. The code I'm using now is included below.
async.series([
  ...
  // Deploy from GitHub
  function (callback) {
    // Console shizzle:
    console.log('');
    console.log('Deploying...'.red.bold);
    console.log();
    console.log();

    var deployFunctions = [
      {
        command: 'cd ' + envOptions.folder + ' && pwd',
        log: false
      },
      {
        command: 'pwd'
      },
      {
        command: 'su ' + envOptions.user,
        log: false
      },
      {
        command: 'git pull'
      },
      {
        command: 'chmod 0777 * -R',
        log: false
      }
    ];

    async.eachSeries(deployFunctions, function (item, callback) {
      deployment.ssh2.exec(item.command, function (err, stream) {
        deployment.logExec(item);

        stream.on('data', function (data, extended) {
          console.log(data.toString().trim());
          console.log();
        });

        function done() {
          callback(err);
        }

        stream.on('exit', done);
        stream.on('end', done);
      });
    }, function () {
      callback();
    });
  },
...);
But after I cd'ed to the right directory, it forgets where it was and starts all over again:
$ cd /some/folder && pwd
/some/folder
$ pwd
/root
@robertklep is correct about why your cd doesn't persist: each command invokes a distinct shell instance which starts in its initial state. You could prefix every command with cd /home/jansenstok/domains/alcoholtesterwinkel.com/public_html/ && as a quick fix, but really you are setting yourself up for pain. What you want is a shell script with all the power of multiple lines, as opposed to a list of individual disconnected commands.
Look at using ssh2's sftp function to transfer a complete shell script to the remote machine as step 1, execute it via exec (/bin/bash /tmp/your_deploy_script.sh) as step 2, and then delete the script as step 3.
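For illustration, here is a minimal sketch of that three-step flow with ssh2; the local deploy.sh, the remote /tmp/deploy.sh path, and the connection details are hypothetical placeholders:

const fs = require('fs');
const { Client } = require('ssh2');

const conn = new Client();

conn.on('ready', () => {
  // Step 1: transfer the deploy script to the remote machine
  conn.sftp((err, sftp) => {
    if (err) throw err;
    sftp.fastPut('./deploy.sh', '/tmp/deploy.sh', (err) => {
      if (err) throw err;
      // Steps 2 and 3: run the script in a single shell session, then delete it
      conn.exec('/bin/bash /tmp/deploy.sh && rm /tmp/deploy.sh', (err, stream) => {
        if (err) throw err;
        stream.on('data', (data) => process.stdout.write(data));
        stream.stderr.on('data', (data) => process.stderr.write(data));
        stream.on('close', () => conn.end());
      });
    });
  });
}).connect({
  host: 'example.com',    // hypothetical host
  username: 'deploy',     // hypothetical user
  privateKey: fs.readFileSync('/path/to/key')
});

Because the whole deployment runs as one script in one shell session, a cd at the top of deploy.sh persists for every following line.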
I know this is a super old question, but I ran into this problem while trying to manage an ACE through my Node server. The answer didn't work for me, but several searches later led me to a wrapper that worked really well for me. Just wanted to share here because this was the top link in my Google search. It's called ssh2shell and can be found here: https://www.npmjs.com/package/ssh2shell
It's very simple to use, just pass an array of commands and they run one by one waiting for each command to complete before moving on to the next.
A practical example:
const { Client } = require('ssh2');

const client = new Client();
const cmds = [
  'ls -lah \n',
  'cd /mnt \n',
  'pwd \n',
  'ls -lah \n',
  'exit \n',
];

client.on('ready', () => {
  console.log('Client :: ready');
  client.shell((err, stream) => {
    stream.on('close', (code) => {
      console.log('stream :: close\n', { code });
    }).on('data', (myData) => {
      console.log('stream :: data\n', myData.toString());
    }).on('exit', (code) => {
      console.log('stream :: exit\n', { code });
      client.end();
    }).on('error', (e) => {
      console.log('stream :: error\n', { e });
    });

    // Write each command to the same interactive shell session
    for (let i = 0; i < cmds.length; i += 1) {
      const cmd = cmds[i];
      stream.write(`${cmd}`);
    }
  });
}).connect({
  host: '127.0.0.1',
  port: 22,
  username: 'root',
  password: 'root',
});
All the examples in the docs use stream.end(), which caused the creation of a new session instead of using the current one.
You shouldn't use "shell" in your program, because the "shell" command invokes a new terminal on the system to do its job. You need to use the "exec" command without emitting "exit". By default, "exec" emits "exit" after the command you gave has been executed.
I would like to git pull, commit and push from Node.js with child_process. Is this supposed to work?
var cmd = require('child_process');

var commandString = "cd c:\\xampp\\htdocs\\MenuMakerServer\\experiments\\editormenu && git commit -am 'menu.json changes' && git push origin main";

cmd.exec(commandString, function (error, stdout, stderr) {
  if (error) {
    callback(error.stack, null);
  }
});
EDIT: OK, I managed to get this to work:
var exec = require('child_process').exec;

function puts(error, stdout, stderr) { console.log(stdout); }

var options = { cwd: "c:\\xampp\\htdocs\\MenuMakerServer\\projects\\editormenu" };

exec("git status && git pull && git commit -am 'menu changed' && git push", options, puts);
Define a Node.js module, something like the code below.
// Run each command with child_process.exec, one after the other
var exec = require('child_process').exec;

exports.series = function (cmds, callback) {
  var execNext = function () {
    exec(cmds.shift(), function (error) {
      if (error) {
        callback(error);
      } else {
        if (cmds.length) execNext();
        else callback(null);
      }
    });
  };
  execNext();
};
Then you can run it:
myProcessor.series([
  'cd c:\\xampp\\htdocs\\MenuMakerServer\\experiments\\editormenu',
  'git commit -am "menu.json changes"',
  'git push origin main'
], function (err) {
  console.log('executed many commands in a row');
});
NOTE: Here myProcessor is the require variable name (something like var myProcessor = require('./path/to/above/code/file');) for the above code snippet.
No, that won't work... it looks like you are combining DOS shell commands and Unix shell commands. Specifically, c:\ is DOS, and using && to chain commands is Unix shell. Which environment are you using?
If you are using DOS, then you need to make a .bat file and call the batch, as sketched below. This is nice because you can use parameters.
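For example, a hedged sketch of that approach, where deploy.bat is a hypothetical batch file that takes the branch name as its first parameter (%1):

// deploy.bat (hypothetical) might contain:
//   cd c:\xampp\htdocs\MenuMakerServer\experiments\editormenu
//   git commit -am "menu.json changes"
//   git push origin %1

var exec = require('child_process').exec;

// Call the batch file, passing the branch name as a parameter
exec('deploy.bat main', function (error, stdout, stderr) {
  if (error) {
    console.error(error);
    return;
  }
  console.log(stdout);
});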