Programmatically pack and publish NPM package with libnpm - javascript

I'd like to publish to NPM within my CI/build system, so I found libnpmpublish which seems to be the correct tool, but explicitly states that it doesn't pack your code into a tarball, although the publish API requires that you pass it a tarball (as compared to, say, a folder or a path).
Their suggested solution is
Since libnpmpublish does not generate tarballs itself, one way to build your own tarball for publishing is to do npm pack in the directory you wish to pack. You can then fs.createReadStream('my-proj-1.0.0.tgz') and pass that to libnpmpublish, along with require('./package.json').
Is there a programmatic (in Node) way to script this process? I looked around the NPM repositories and couldn't find a package dedicated to packing, though I did find this code which seems to implement packing, but it lives in an archived repository; that is, it's not part of libnpm.

The closest I can find is npm-packlist which, when given a folder, creates a list of files which can be forwarded to the NPM tar package, as demonstrated in the README for npm-packlist.
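For reference, a sketch of that README approach (assuming an older npm-packlist major whose export accepts a { path } option; newer versions expect an Arborist tree instead, so check the version you have installed):

const packlist = require('npm-packlist')
const tar = require('tar')

async function buildTarball(packageDir, tarballPath) {
  // List the files npm would include, respecting .npmignore / package.json "files"
  const files = await packlist({ path: packageDir })
  // Write a gzipped tarball with everything under the conventional package/ prefix
  await tar.create(
    {
      gzip: true,
      cwd: packageDir,
      prefix: 'package/',
      file: tarballPath,
    },
    files
  )
  return tarballPath
}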

This is a bit of a hack using the command line, but it works for a similar scenario where I'm creating and uploading a package via a REST call. I wrap it in a promise that resolves with a read stream so I can easily add it to the form-post data:
const { exec } = require('child_process');
const fs = require('fs');
const p = require('path');

// d = directory to run `npm pack` in, t = the package/folder to pack
return new Promise((resolve, reject) => {
  exec(`npm pack ${t}`, { cwd: d }, (error, stdout, stderr) => {
    if (error) {
      console.error(error);
      return reject(error);
    }
    // `npm pack` prints the generated tarball's filename on stdout
    const f = d + p.sep + stdout.trim();
    if (debug) console.log(`tarball "${f}" created.`);
    resolve(fs.createReadStream(f));
  });
});
Enjoy!

There is now libnpmpack, which generates a tarball buffer in exactly the format that libnpmpublish expects, so this can be done with:
const pack = require('libnpmpack')
const { publish } = require('libnpmpublish')

async function packAndPublish(packagePath) {
  // readPackageJson omitted for brevity
  const manifest = readPackageJson(packagePath)
  const tarball = await pack(packagePath)
  return publish(manifest, tarball)
}
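For a CI setup you will typically also need to read the manifest yourself and pass registry credentials. A sketch of one way to fill in the omitted pieces (the token option is forwarded to npm-registry-fetch; NPM_TOKEN is assumed to be set in your CI environment, and a reasonably recent Node is assumed for fs/promises):

const fs = require('fs/promises')
const path = require('path')
const pack = require('libnpmpack')
const { publish } = require('libnpmpublish')

async function packAndPublish(packagePath) {
  // The manifest is just the parsed package.json of the package being published
  const manifest = JSON.parse(
    await fs.readFile(path.join(packagePath, 'package.json'), 'utf8')
  )
  const tarball = await pack(packagePath)
  return publish(manifest, tarball, {
    token: process.env.NPM_TOKEN, // assumed CI secret; adjust to your registry auth
  })
}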

Related

How to monitor STDOUT for a child_process spawn session that has a terminal animation?

Let's say I want to run npm install inside a Node.js script and log the STDOUT. I could think of something like this:
const child_process = require('child_process');

// note: naming this variable "process" would shadow the global process object
const child = child_process.spawn("npm", ["install", package_name]);
child.stdout.on('data', function (chunk) {
  console.log(chunk.toString());
});
While this kind of execution works in some cases, in others it errors out. It doesn't give me enough information about what exactly is causing the error, so I can only guess.
One thing I noticed is that nowadays a lot of npm install runs do NOT print their log serially, but instead display inline animations (progress bars, spinners) and the like.
Here's an example of what I'm talking about:
My question is:
Might this kind of animation be why stdout.on('data') errors out in some cases?
How do I deal with this situation? I just want to get the full stream of all the data.
There are stdout and stderr streams; maybe try to catch errors there? Here is part of my code where I use the npm installer, but in a slightly different way, by invoking npm-cli.js directly, which lets you use npm without a global installation on the server:
// Require child_process module
const { fork } = require('child_process');

// Working directory for the installer subprocess
const cwd = './path-where-to-run-npm-command';

// CLI path FROM the cwd path! Pay attention here: the path should go FROM
// your cwd directory TO your locally installed npm module
const cli = '../node_modules/npm/bin/npm-cli.js';

// npm arguments to run with.
// If your working directory already contains a package.json file, just install it!
const args = ['install']; // Or, e.g. ['audit', 'fix']

// Run the installer
const installer = fork(cli, args, {
  silent: true,
  cwd: cwd
});

// Monitor the installer's STDOUT and STDERR
installer.stdout.on('data', (data) => {
  console.log(data.toString());
});
installer.stderr.on('data', (data) => {
  console.error(data.toString());
});

// Do something on installer exit
installer.on('exit', (code) => {
  console.log(`Installer process finished with code ${code}`);
});
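The inline animations the question mentions come from npm's progress bar and spinner, which emit ANSI control sequences (mostly on stderr). If you only want a plain, serial log, a hedged sketch along these lines should work (--no-progress and --loglevel are standard npm config flags, but verify against your npm version):

const { spawn } = require('child_process');

// Disable npm's progress spinner so stdout/stderr contain plain text only
function npmInstall(pkg, cwd) {
  return new Promise((resolve, reject) => {
    // On Windows you may need 'npm.cmd' or { shell: true }
    const child = spawn('npm', ['install', pkg, '--no-progress', '--loglevel=info'], { cwd });
    let out = '';
    let errOut = '';
    child.stdout.on('data', (chunk) => { out += chunk.toString(); });
    child.stderr.on('data', (chunk) => { errOut += chunk.toString(); });
    child.on('error', reject); // e.g. npm binary not found
    child.on('close', (code) => {
      code === 0 ? resolve(out) : reject(new Error(errOut || `npm exited with code ${code}`));
    });
  });
}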

Failed building JavaScript bundle. Cannot read property 'reduce' of undefined on react native module traverseDependencies.js

Hi, I don't know what is happening here; it seems to be related to a dependency. It's code that I didn't write, it's part of the bundle, and I use Expo to run my project, but I can't open my app because of that error. Please help, it's for a college project!
function resolveDependencies(parentPath, dependencies, options) {
  const resolve = (parentPath, result) => {
    const relativePath = result.name;
    try {
      return [
        relativePath,
        {
          absolutePath: options.resolve(parentPath, relativePath),
          data: result
        }
      ];
    } catch (error) {
      // Ignore unavailable optional dependencies. They are guarded with a
      // try-catch block and will be handled during runtime.
      if (result.data.isOptional !== true) {
        throw error;
      }
    }
    return undefined;
  };
  const resolved = dependencies.reduce((list, result) => {
    const resolvedPath = resolve(parentPath, result);
    if (resolvedPath) {
      list.push(resolvedPath);
    }
    return list;
  }, []);
  return new Map(resolved);
}
Re-traverse the dependency graph in DFS order to reorder the modules and
guarantee the same order between runs. This method mutates the passed graph.
I had the same issue on the latest expo-cli (4.8.1).
The following steps helped me:
downgrade from 4.8.1 to 4.7.3: npm install -g expo-cli@~4.7.3
clear the npm cache by executing npm cache clean --force
clear the local user cache by deleting everything in the C:\Users\<user>\AppData\Local\Temp folder.
After these steps, it is working again.
I had this issue when running expo start --dev-client on expo-cli version 4.12.1.
I solved it by adding the --clear flag (which clears the Metro bundler cache)
Deleting the contents of the system Temp cache folder will work. When I faced the same issue, this solution helped me solve the problem.

How to read a symlink in Node.js

I want to read a symlink, and get the details of the link itself, not the contents of the linked file. How do I do that in Node, in a cross-platform way?
I can detect symlinks easily using lstat, no problem. Once I know the path of the file, and that it is a symlink though, how can I read it? fs.readFile always reads the target file, or throws an error for reading a directory for links to directories.
There is a fs.constants.O_SYMLINK constant, which in theory solves this on OSX, but it seems to be undefined on both Ubuntu & Windows 10.
If you have determined that the file is a symlink, try this:
fs.readlink("./mysimlink", function (err, linkString) {
  // .. do some error handling here ..
  console.log(linkString);
});
Confirmed as working on Linux.
You could then use fs.realpath() to turn it into a full path. Be aware, though, that linkString can be just a filename or a relative path as well as a fully qualified path, so you may have to call fs.realpath() on the symlink itself, take its directory part, and prefix that to linkString before using fs.realpath() on the result.
I've just faced the same issue: sometimes fs.readlink returns a relative path, sometimes it returns an absolute path.
(proper error handling not implemented to keep things simple)
const fs = require('fs');
const pathPckg = require('path');

async function getTarLinkOfSymLink(path) {
  return new Promise((resolve, reject) => {
    fs.readlink(path, (err, tarPath) => {
      if (err) {
        console.log(err.message);
        return resolve('');
      }
      const baseSrcPath = pathPckg.dirname(path);
      return resolve(pathPckg.resolve(baseSrcPath, tarPath));
    });
  });
}
// usage:
const path = '/example/symbolic/link/path';
const tarPath = await getTarLinkOfSymLink(path);
The code works if the symbolic link is either a file or a directory/folder - tested on Linux
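For what it's worth, the same idea is more compact with the promise-based API (a sketch, assuming a Node version that ships fs/promises):

const fs = require('fs/promises');
const path = require('path');

// Resolve a symlink target against the link's own directory,
// so relative targets come back as absolute paths.
async function readLinkAbsolute(linkPath) {
  const target = await fs.readlink(linkPath);
  return path.resolve(path.dirname(linkPath), target);
}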

Execute a JS file (with logs, etc...) inside another NodeJS process

Here is my problem, I want to create a CLI that automatically runs a test. Without the CLI, I'm able to run everything perfectly with the node command:
node test.js
Basically, I want to do the exact same thing as the command before, so I googled for a technique that does this. I found this:
#!/usr/bin/env node
'use strict';
const options = process.argv;
const { execFile } = require('child_process');
const child = execFile('node', ['../dist/test.js'], (error, stdout, stderr) => {
  if (error) {
    throw error;
  }
  console.log(stdout);
});
This method doesn't work for me because, in the test.js file, I'm using the ora package. And because this package renders real-time animations, its output doesn't come through stdout properly.
Is there any way of executing in real time (without subprocess) my test.js using Node? I'm open to other methods, but I want to publish the CLI on NPM, so keep in mind that it has to be in JavaScript 😊.
You can find every file that I've talked here on GitHub. Normally, you wouldn't need this link, but I'm giving it to you if you need to have a closer look.
You should simply call your test() function from your CLI code, after requiring the module that defines it. Have a look at mocha and jasmine: you will see that while both tools provide a CLI, they also provide instructions for invoking the test frameworks from arbitrary JS code.
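A minimal sketch of that idea (the export name is hypothetical; the point is that the tests run in the same process, so ora's spinners render normally):

// test.js -- export the entry point instead of running it on load (hypothetical name)
async function runTests() {
  // ... the existing test logic, ora spinners included ...
}
module.exports = { runTests };

// cli.js -- the published bin script just requires the module and calls it
const { runTests } = require('../dist/test.js');
runTests().catch((err) => {
  console.error(err);
  process.exit(1);
});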
I can't think of a way without a sub-process, but this may help.
The child_process exec will not work well with commands that produce continuous output, since it buffers the output and the process halts when that buffer is full.
The suitable solution is spawn:
var spawn = require('child_process').spawn;
var child = spawn('node', ['../dist/test.js']);

child.stdout.on('data', function (data) {
  console.log(data.toString());
});
child.stderr.on('data', function (data) {
  console.log(data.toString());
});
Here is my solution: you can use the fs module to read the file's code and then simply use eval to execute it in the same process.
const fs = require("fs");

function run(file) {
  fs.readFile(file, (err, data) => {
    if (err) throw err;
    eval(data.toString('utf8'));
  });
}

How to execute shell command in Javascript

I want to write a JavaScript function which will execute the system shell commands (ls for example) and return the value.
How do I achieve this?
I'll answer assuming that by "shell script" the asker meant Node.js backend JavaScript, possibly using commander.js to frame your code :)
You could use the child_process module from node's API. I pasted the example code below.
var exec = require('child_process').exec;
exec('cat *.js bad_file | wc -l',
  function (error, stdout, stderr) {
    console.log('stdout: ' + stdout);
    console.log('stderr: ' + stderr);
    if (error !== null) {
      console.log('exec error: ' + error);
    }
  });
I don't know why the previous answers gave all sorts of complicated solutions. If you just want to execute a quick command like ls, you don't need async/await or callbacks or anything. Here's all you need - execSync:
const execSync = require('child_process').execSync;
// import { execSync } from 'child_process'; // replace ^ if using ES modules
const output = execSync('ls', { encoding: 'utf-8' }); // the default is 'buffer'
console.log('Output was:\n', output);
For error handling, add a try/catch block around the statement.
If you're running a command that takes a long time to complete, then yes, look at the asynchronous exec function.
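For example, a quick sketch of that error handling (execSync throws when the command exits non-zero, and the thrown error carries the exit status and captured output):

const { execSync } = require('child_process');

try {
  const output = execSync('ls', { encoding: 'utf-8' });
  console.log('Output was:\n', output);
} catch (err) {
  // err.status is the exit code; err.stdout / err.stderr hold the captured output
  console.error('Command failed with status', err.status);
  console.error(err.stderr);
}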
...a few years later...
ES6 has been accepted as a standard and ES7 is around the corner, so this deserves an updated answer. We'll use ES6 with async/await on Node.js plus Babel as an example; the prerequisites are:
nodejs with npm
babel
Your example foo.js file may look like:
import { exec } from 'child_process';

/**
 * Execute simple shell command (async wrapper).
 * @param {String} cmd
 * @return {Object} { stdout: String, stderr: String }
 */
async function sh(cmd) {
  return new Promise(function (resolve, reject) {
    exec(cmd, (err, stdout, stderr) => {
      if (err) {
        reject(err);
      } else {
        resolve({ stdout, stderr });
      }
    });
  });
}

async function main() {
  let { stdout } = await sh('ls');
  for (let line of stdout.split('\n')) {
    console.log(`ls: ${line}`);
  }
}

main();
Make sure you have babel:
npm i babel-cli -g
Install latest preset:
npm i babel-preset-latest
Run it via:
babel-node --presets latest foo.js
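For what it's worth, on current Node versions Babel is no longer needed for this; util.promisify over exec gives the same ergonomics natively:

const util = require('util');
const exec = util.promisify(require('child_process').exec);

async function main() {
  // the promisified exec resolves with { stdout, stderr }
  const { stdout } = await exec('ls');
  for (const line of stdout.split('\n')) {
    console.log(`ls: ${line}`);
  }
}

main().catch(console.error);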
This depends entirely on the JavaScript environment. Please elaborate.
For example, in Windows Scripting, you do things like:
var shell = WScript.CreateObject("WScript.Shell");
shell.Run("command here");
In a nutshell:
// Instantiate the Shell object and invoke its execute method.
var oShell = new ActiveXObject("Shell.Application");
var commandtoRun = "C:\\Winnt\\Notepad.exe";
if (inputparms != "") {
  var commandParms = document.Form1.filename.value;
}
// Invoke the execute method.
oShell.ShellExecute(commandtoRun, commandParms, "", "open", "1");
Note: These answers are from a browser based client to a Unix based web server.
Run command on client
You essentially can't. Security says only run within a browser and its access to commands and filesystem is limited.
Run ls on server
You can use an AJAX call to retrieve a dynamic page passing in your parameters via a GET.
Be aware that this also opens up a security risk, as you would have to do something to ensure that a rogue hacker does not get your application to run, say, /dev/null && rm -rf / ......
So in a nutshell, running shell commands from client-side JS is just a bad, bad idea.... YMMV
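If you do take the server route, the browser side is just an ordinary request to an endpoint the server controls (the endpoint name below is hypothetical, and the server must strictly limit what it will run):

// Browser side: ask the server for the output of a command it chooses to expose.
fetch('/api/list-files?dir=public')   // hypothetical, server-defined endpoint
  .then((res) => res.text())
  .then((output) => console.log(output))
  .catch((err) => console.error(err));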
With Node.js it's as simple as this!
And if you want to run this script at each boot of your server, you can have a look at the forever-service application!
var exec = require('child_process').exec;
exec('php main.php', function (error, stdOut, stdErr) {
  // do what you want!
});
function exec(cmd, handler = function (error, stdout, stderr) {
  console.log(stdout);
  if (error !== null) {
    console.log(stderr);
  }
}) {
  const childfork = require('child_process');
  return childfork.exec(cmd, handler);
}
This function can be easily used like:
exec('echo test');
//output:
//test
exec('echo test', function(err, stdout){console.log(stdout+stdout+stdout)});
//output:
//testtesttest
Here is a simple example that executes the Linux ifconfig shell command:
var child_process = require('child_process');
child_process.exec('ifconfig', function (err, stdout, stderr) {
  if (err) {
    console.log("\n" + stderr);
  } else {
    console.log(stdout);
  }
});
If you are using npm you can use the shelljs package
To install: npm install [-g] shelljs
var shell = require('shelljs');
shell.ls('*.js').forEach(function (file) {
  // do something
});
See more: https://www.npmjs.com/package/shelljs
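shelljs also has an exec helper, which is closer to what the question asks; a quick sketch:

const shell = require('shelljs');

// shell.exec runs a command; { silent: true } stops it echoing to the console
const result = shell.exec('ls -la', { silent: true });
if (result.code !== 0) {
  console.error(result.stderr);
} else {
  console.log(result.stdout);
}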
Another post on this topic with a nice jQuery/Ajax/PHP solution:
shell scripting and jQuery
In IE, you can do this :
var shell = new ActiveXObject("WScript.Shell");
shell.run("cmd /c dir & pause");
With nashorn you can write a script like this:
$EXEC('find -type f');
var files = $OUT.split('\n');
files.forEach(...
...
and run it:
jjs -scripting each_file.js
As far as I can tell, there is no built-in function, method or otherwise, in the official ECMAScript specification to run an external process. That said, extensions are allowed, see this note from the spec, for example:
NOTE Examples of built-in functions include parseInt and Math.exp. A
host or implementation may provide additional built-in functions that
are not described in this specification.
One such "host" is Node.js which has the child_process module. Let's try this code to execute the Linux shell command ps -aux, saved in runps.js, based on the child_process documentation:
const { spawn } = require('child_process');
const ps = spawn('ps', ['-aux']);

ps.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ps.stderr.on('data', (data) => {
  console.error(`stderr: ${data}`);
});

ps.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});
Which produces the following example output, running it in docker:
$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app node:17-bullseye node ./runps.js
stdout: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.8 319312 33888 ? Ssl 11:08 0:00 node ./runps.js
root 13 0.0 0.0 6700 2844 ? R 11:08 0:00 ps -aux
child process exited with code 0
The thing I like about this module, is that it's included with the Node.js distribution, no npm install ... needed.
If you search the Node.js code in github for spawn you will find references to the implementation in C or C++ in the engine. Modern browsers like Firefox and Chrome would be reluctant to extend JavaScript with such features, for obvious security reasons, even if the underlying engine such as V8 supports it.
On that note, it's better not to run our container as root, let's try the above example again, adding a random user this time.
$ docker run --rm -u 7000 -v "$PWD":/usr/src/app -w /usr/src/app node:17-bullseye node ./runps.js
stdout: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
7000 1 5.0 0.8 319312 33812 ? Ssl 11:19 0:00 node ./runps.js
7000 13 0.0 0.0 6700 2832 ? R 11:19 0:00 ps -aux
child process exited with code 0
Of course that's better but not enough. If this approach is used at all, more precautions must be taken, such as ensuring that no arbitrary user commands can be executed.
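One such precaution is to pass the executable and its arguments separately, from a fixed allow-list, and never to build a shell string from user input. A hedged sketch of that pattern (command names here are illustrative):

const { spawn } = require('child_process');

// Keep a fixed allow-list of commands; user input only selects an entry,
// it is never interpolated into a shell string (spawn defaults to shell: false).
const ALLOWED = { list: ['ls', '-l'], disk: ['df', '-h'] };

function runAllowed(name) {
  const cmd = ALLOWED[name];
  if (!cmd) throw new Error(`command "${name}" is not allowed`);
  return spawn(cmd[0], cmd.slice(1), { stdio: 'inherit' });
}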
Windows 10
My version of Windows 10 still has Windows Script Host which can run JScript on the console with the wscript.exe or cscript.exe programs, i.e. no browser needed. To try it out you can open a PowerShell Windows Terminal. Save the following code into a file which you can call shell.js:
WScript.StdOut.WriteLine("Hallo, ECMAScript on Windows!");
WScript.CreateObject("WScript.Shell").run("C://Windows//system32//mspaint.exe");
And on the command line, run:
cscript .\shell.js
Which shows the following and opens Paint:
Microsoft (R) Windows Script Host Version 5.812
Copyright (C) Microsoft Corporation. All rights reserved.
Hallo, ECMAScript on Windows!
Other variations exist. Find the documentation applicable to your preferred JavaScript runtime environment.
const fs = require('fs');

function ls(startPath) {
  fs.readdir(startPath, (err, entries) => {
    if (err) throw err;
    console.log(entries);
  });
}

ls('/home/<profile_name>/<folder_name>');
The startPath used here follows a path layout typical of a Debian-based distro.
Js file
var oShell = new ActiveXObject("Shell.Application");
oShell.ShellExecute("E:/F/Name.bat","","","Open","");
Bat file
powershell -Command "& {ls | Out-File -FilePath 'E:/F/Name.txt'}"
Js file (run with node namefile.js)
const fs = require('fs');
fs.readFile('E:/F/Name.txt', (err, data) => {
  if (err) throw err;
  console.log(data.toString());
});
You can also do all of this in one solution with an asynchronous function. Running shell commands directly like this can, however, raise security problems.
