How to resolve a path that includes an environment variable, in Node.js?

I would like to run an executable whose path contains an environment variable. For example, to run chrome.exe I would like to write something like this
var spawn = require('child_process').spawn;
spawn('chrome',[], {cwd: '%LOCALAPPDATA%\\Google\\Chrome\\Application', env: process.env})
instead of
var spawn = require('child_process').spawn;
spawn('chrome',[], {cwd: 'C:\\Users\\myuser\\AppData\\Local\\Google\\Chrome\\Application', env: process.env})
Is there a package I can use in order to achieve this?

You can use a regex to replace your variable with the relevant property of process.env:
let str = '%LOCALAPPDATA%\\Google\\Chrome\\Application'
let replaced = str.replace(/%([^%]+)%/g, (_,n) => process.env[n])
I don't think a package is needed when it's a one-liner.
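For completeness, here's how that one-liner slots into the original spawn call (a minimal sketch; it leaves unknown names untouched, and the Chrome path assumes a default Windows install):
const { spawn } = require('child_process');

// Expand %VAR% references against the current environment,
// leaving names that aren't set untouched (sketch).
const expand = (str) =>
  str.replace(/%([^%]+)%/g, (_, name) => process.env[name] || `%${name}%`);

const cwd = expand('%LOCALAPPDATA%\\Google\\Chrome\\Application');
spawn('chrome', [], { cwd, env: process.env });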

I realize that the question is asking about Windows environment variables, but I modified @Denys Séguret's answer to handle bash's ${MY_VAR} and $MY_VAR style formats, as I thought it might be useful for others who come here.
Note: the callback takes two arguments because the regex has two capture groups, one for each variation of the format.
str.replace(/\$([A-Z_]+[A-Z0-9_]*)|\${([A-Z0-9_]*)}/ig, (_, a, b) => process.env[a || b])
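A quick usage sketch (assumes HOME is set, as it is on most Unix-like systems):
const expandUnix = (str) =>
  str.replace(/\$([A-Z_]+[A-Z0-9_]*)|\${([A-Z0-9_]*)}/ig, (_, a, b) => process.env[a || b]);

console.log(expandUnix('$HOME/projects'));   // e.g. /home/alice/projects
console.log(expandUnix('${HOME}/projects')); // same result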

Adding a TypeScript-friendly addition to the excellent answer by Denys Séguret:
let replaced = str.replace(/%([^%]+)%/g, (original, matched) => {
  const r = process.env[matched]
  return r ? r : ''
})

Here is a generic helper function for this:
/**
 * Replaces all environment variables with their actual value.
 * Keeps intact non-environment variables using '%'.
 * @param  {string} filePath The input file path with percents
 * @return {string}          The resolved file path
 */
function resolveWindowsEnvironmentVariables (filePath) {
  if (!filePath || typeof filePath !== 'string') {
    return '';
  }
  /**
   * @param  {string} withPercents    '%USERNAME%'
   * @param  {string} withoutPercents 'USERNAME'
   * @return {string}
   */
  function replaceEnvironmentVariable (withPercents, withoutPercents) {
    let found = process.env[withoutPercents];
    // 'C:\Users\%USERNAME%\Desktop\%asdf%' => 'C:\Users\bob\Desktop\%asdf%'
    return found || withPercents;
  }
  // 'C:\Users\%USERNAME%\Desktop\%PROCESSOR_ARCHITECTURE%' => 'C:\Users\bob\Desktop\AMD64'
  filePath = filePath.replace(/%([^%]+)%/g, replaceEnvironmentVariable);
  return filePath;
}
- Can be called from anywhere
- Does basic type checking first; you may want to change what is returned by default in the first if block
- Functions are named in ways that explain what they do
- Variables are named in ways that explain what they are
- Comments make it clear what outcomes can occur
- Handles non-environment variables wrapped in percents, since the Windows file system allows folders to be named %asdf% (see the sketch below)
- JSDoc blocks for automated documentation, type checking, and auto-complete in certain editors
- You may also want to check if (process.platform !== 'win32') {} depending on your needs
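A usage sketch (assumes USERNAME is set and %asdf% is not, so the latter is left intact):
resolveWindowsEnvironmentVariables('C:\\Users\\%USERNAME%\\Desktop\\%asdf%');
// => 'C:\\Users\\bob\\Desktop\\%asdf%' (with your own user name in place of bob)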

These answers are crazy. You can just use path:
const folder = require('path').join(
  process.env.LOCALAPPDATA,
  'Google/Chrome/Application',
);
console.log(folder); // C:\Users\MyName\AppData\Local\Google\Chrome\Application
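If you know which variable you need, this composes directly with the original spawn call (a sketch along the lines of the question):
const { spawn } = require('child_process');
const path = require('path');

const chromeDir = path.join(process.env.LOCALAPPDATA, 'Google/Chrome/Application');
spawn('chrome', [], { cwd: chromeDir, env: process.env });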

On Linux/macOS, I spawn a process to resolve paths with env variables; it is safe because bash does the work for you. Obviously less performant, but a lot more robust. It looks like this:
import * as cp from 'child_process';

// mapPaths takes an array of paths/strings with env vars, and expands each one
export const mapPaths = (searchRoots: Array<string>, cb: Function) => {
  const mappedRoots = searchRoots.map(function (v) {
    return `echo "${v}"`;
  });
  const k = cp.spawn('bash');
  k.stdin.end(mappedRoots.join(';'));
  const results: Array<string> = [];
  k.stderr.pipe(process.stderr);
  k.stdout.on('data', (d: string) => {
    results.push(d);
  });
  k.once('error', (e) => {
    console.error(e.stack || e);
    cb(e);
  });
  k.once('exit', code => {
    const pths = results.map((d) => {
        return String(d || '').trim();
      })
      .filter(Boolean);
    cb(code, pths);
  });
};
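A hypothetical call (the paths are made up; bash expands both the $VAR and ${VAR} forms):
mapPaths(['$HOME/projects', '${HOME}/downloads'], (code, paths) => {
  console.log(code, paths); // e.g. 0 [ '/home/alice/projects', '/home/alice/downloads' ]
});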

Related

Hanging promise canceled

I'm trying to set up Cloudflare Workers to track the circulation of some ERC20 tokens, as an exercise to learn web3 and wasm. I thought it would be simple enough, but about 90% of the time so far has been spent trying to solve this elusive error:
A hanging Promise was canceled. This happens when the worker runtime is waiting for a Promise from JavaScript to resolve but has detected that the Promise cannot possibly ever resolve because all code and events related to the Promise's request context have already finished.
I looked for additional information online, but it seems my error is of a different type(?).
Here's a simple snippet of code to reproduce.
mod erc20_abi;

use erc20_abi::ERC20_ABI;

use cfg_if::cfg_if;
use ethers::{
    contract::Contract,
    core::{abi::Abi, types::Address},
    prelude::{AbiError, U256},
    providers::{Http, Provider},
};
use num_format::{Locale, ToFormattedString};
use std::convert::TryFrom;
use wasm_bindgen::prelude::*;

cfg_if! {
    // When the `wee_alloc` feature is enabled, use `wee_alloc` as the global
    // allocator.
    if #[cfg(feature = "wee_alloc")] {
        extern crate wee_alloc;
        #[global_allocator]
        static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;
    }
}

#[wasm_bindgen]
pub async fn handle() -> String {
    let web3_ethereum = Provider::<Http>::try_from(WEB3_URL_ETHEREUM).unwrap();
    let abi: Abi = serde_json::from_str(ERC20_ABI).unwrap();
    let token_contract_ethereum = Contract::new(parse_address(ADDRESS_ETH), abi, web3_ethereum);
    let convert_wei_to_decimal = |bignumber: U256| -> String {
        (bignumber.as_u128() / u128::pow(10, 18)).to_formatted_string(&Locale::en)
    };
    // I believe this is the problem, since just returning a String works fine.
    let total_supply_ethereum = token_contract_ethereum
        .method::<_, U256>("totalSupply", ())
        .unwrap()
        .call()
        .await
        .unwrap();
    convert_wei_to_decimal(total_supply_ethereum)
}

fn parse_address(address: &str) -> Address {
    address.parse::<Address>().unwrap()
}
This is the worker/workers.js file
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request))
})

const { handle } = wasm_bindgen;
const instance = wasm_bindgen(wasm);

/**
 * Fetch and log a request
 * @param {Request} request
 */
async function handleRequest(request) {
  await instance;
  const output = await handle();
  let res = new Response(output, { status: 200 });
  res.headers.set('Content-type', 'text/html');
  return res;
}
Cargo.toml
[package]
name = "circulating-supply"
version = "0.1.0"
license = "GPL-3.0-or-later"
edition = "2018"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[lib]
crate-type = ["cdylib", "rlib"]

[profile.release]
opt-level = 's' # Optimize for size.
lto = true
panic = "abort"
codegen-units = 1

[dependencies]
ethers = { git = "https://github.com/gakonst/ethers-rs" }
serde_json = "1.0.68"
num-format = "0.4.0"
cfg-if = "1.0.0"
wee_alloc = { version = "0.4.5", optional = true }
wasm-bindgen = "0.2.78"
wasm-bindgen-futures = "0.4.28"
js-sys = "0.3.55"
wrangler dev will compile it fine, but going to http://127.0.0.1:8787 will result in Error 1101
In my case a dependency used something not available in the wasm runtime.
I guess ethers' cryptography dependencies also depend on something like getrandom.
Adding this to Cargo.toml solved my issue.
[target.wasm32-unknown-unknown.dependencies]
getrandom = { version = "0.1", features = ["wasm-bindgen"] }
This will force the dependencies that are based on getrandom to use its wasm features.

How to configure jest snapshot locations

I'd like my jest snapshots to be created as siblings of my test files.
I currently have my snapshots being put in the default __snapshots__ folder.
I found this post on github: https://github.com/facebook/jest/issues/1650
In the thread, someone says the following should work, but I haven't had any luck (even after changing the regex and making other changes):
module.exports = {
  testPathForConsistencyCheck: 'some/example.test.js',
  resolveSnapshotPath: (testPath, snapshotExtension) =>
    testPath.replace(/\.test\.([tj]sx?)/, `${snapshotExtension}.$1`),
  resolveTestPath: (snapshotFilePath, snapshotExtension) =>
    snapshotFilePath.replace(snapshotExtension, '.test'),
}
In package.json (or jest.config.js, if you use that) you need to add the path to the snapshotResolver file:
"jest": {
"snapshotResolver": "./snapshotResolver.js"
}
snapshotResolver.js is the file with the code you found in the GitHub issue.
In my case this file was located at the root of the project (near the node_modules folder).
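If your config lives in jest.config.js instead, the equivalent would presumably be:
// jest.config.js (sketch)
module.exports = {
  snapshotResolver: './snapshotResolver.js',
};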
These solutions are more complicated than is needed for what you are trying to do, as pointed out in https://github.com/facebook/jest/issues/1650
Solution
Create a file; I used 'jest/snapshotResolver.js':
module.exports = {
  resolveSnapshotPath: (testPath, snapshotExtension) =>
    testPath + snapshotExtension,
  resolveTestPath: (snapshotFilePath, snapshotExtension) =>
    snapshotFilePath.replace(snapshotExtension, ''),
  testPathForConsistencyCheck: 'some.test.js',
};
In your jest config set that file as the resolver:
snapshotResolver: './jest/snapshotResolver.js',
or if your jest config is in package.json:
"snapshotResolver": "./jest/snapshotResolver.js",
Explanation
In short, these two functions mirror each other: one takes the test file path and returns the snapshot path, and the other takes the snapshot path and returns the test file path. The third export is an example file path used for validation.
Clearer code: to help clarify what is going on, the same resolver is written out below.
One thing to keep in mind is that the path is the full path, not a relative path.
/**
 * @param testPath Path of the test file being tested
 * @param snapshotExtension The extension for snapshots (.snap usually)
 */
const resolveSnapshotPath = (testPath, snapshotExtension) => {
  const snapshotFilePath = testPath + snapshotExtension; // (i.e. some.test.js + '.snap')
  return snapshotFilePath;
}

/**
 * @param snapshotFilePath The filename of the snapshot (i.e. some.test.js.snap)
 * @param snapshotExtension The extension for snapshots (.snap)
 */
const resolveTestPath = (snapshotFilePath, snapshotExtension) => {
  const testPath = snapshotFilePath.replace(snapshotExtension, '').replace('__snapshots__/', ''); // Remove the .snap
  return testPath;
}

/* Used to validate resolveTestPath(resolveSnapshotPath( {this} )) */
const testPathForConsistencyCheck = 'some.test.js';

module.exports = {
  resolveSnapshotPath, resolveTestPath, testPathForConsistencyCheck
};
Also, in addition to setting the path to the snapshotResolver as suggested in Vasyl Nahuliak's answer, to have the snapshot and test files look like this:
file.test.js
file.test.js.snap
your snapshotResolver should look like this:
module.exports = {
  testPathForConsistencyCheck: 'some/example.test.js',
  resolveSnapshotPath: (testPath, snapshotExtension) =>
    testPath.replace(/\.test\.([tj]sx?)/, `.test.$1${snapshotExtension}`),
  resolveTestPath: (snapshotFilePath, snapshotExtension) =>
    snapshotFilePath.replace(snapshotExtension, ''),
};

Faker.js Generating random paths doesn't work

I am using faker.js
https://www.npmjs.com/package/faker
to generate random data. It works fine, although when I try to create a path like this
faker.system.directoryPath() + '/' + faker.system.filePath()
I always get undefined for both, so the methods seem to exist but don't return anything.
Did anyone use one of these methods before?
Thanks in advance, any help would be really appreciated.
Bye
Those functionalities are not implemented yet; take a look at https://github.com/Marak/faker.js/blob/master/lib/system.js#L132 and https://github.com/Marak/faker.js/blob/master/lib/system.js#L141:
/**
 * not yet implemented
 *
 * @method faker.system.filePath
 */
this.filePath = function () {
  // TODO
};
Here is a proof of concept of how it could be implemented:
var faker = require('faker');
var path = require('path');

faker.directoryPath = function() {
  return path.format({base: faker.fake("{{random.words}}").replace(/ /g, path.sep).toLowerCase()})
}

console.log(faker.directoryPath() + path.sep + faker.system.fileName()) // e.g. avon\re-engineered\strategist_gorgeous_wooden_fish_cambridgeshire.sm

node module include data file as a global variable

I set out to make my first node module. It's very simple: it takes input from a user and validates it by comparing it to a large data file. The data file is just a JSON object. Here is the code for index.js:
// borrowed heavily from http://quickleft.com/blog/creating-and-publishing-a-node-js-module
var fs = require('fs');
var obj = JSON.parse(fs.readFileSync('codes.json', 'utf8'));

module.exports = {
  /**
   * Validate a NAICS code using 2012 NAICS codes
   *
   * @param {String} 2 to 6 digit numeric
   * @return {Boolean}
   */
  validateNAICS: function(inNAICS) {
    inNAICS = inNAICS || "";
    if (inNAICS.length > 6) { return false };
    if (inNAICS.length < 2) { return false };
    if (obj.hasOwnProperty(inNAICS)) { return true };
    return false;
  },
  /**
   * Given a NAICS code return the industry name
   *
   * @param {String} 2 to 6 digit numeric
   * @return {String}
   */
  translateNAICS: function(inNAICS) {
    inNAICS = inNAICS || "000";
    if (obj.hasOwnProperty(inNAICS)) { return obj[inNAICS] };
    return "Unknown";
  }
};
However, when I try to use this in a project, the code seems to look for codes.json in the project's root rather than in the module's root. I was able to prove this by making a copy of codes.json, putting it in the project's root, and everything worked as expected.
So what is the proper way to include the data from codes.json so that it is available to my two functions?
The easiest way is to simply require it via a relative path name:
var obj = require('./codes.json');
Requires are always relative to the module itself, not the current working directory. Node.js understands that this is a JSON file and parses it accordingly.
In general, you can get a reference to the directory the module is in via the __dirname variable. So the following works as well:
var fs = require('fs');
var path = require('path');
var obj = JSON.parse(fs.readFileSync(path.join(__dirname, 'codes.json'), 'utf8'));
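To see why the original version broke, compare the two lookups (a sketch; the directory names are made up):
// Suppose the module lives in /home/alice/project/node_modules/naics:
console.log(__dirname);     // /home/alice/project/node_modules/naics
console.log(process.cwd()); // /home/alice/project (wherever node was started)

// fs.readFileSync('codes.json') resolves against process.cwd(),
// while require('./codes.json') resolves against __dirname.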

How to create streams from string in Node.Js?

I am using a library, ya-csv, that expects either a file or a stream as input, but I have a string.
How do I convert that string into a stream in Node?
As @substack corrected me in #node, the new streams API in Node v0.10 makes this easier:
const Readable = require('stream').Readable;
const s = new Readable();
s._read = () => {}; // redundant? see update below
s.push('your text here');
s.push(null);
… after which you can freely pipe it or otherwise pass it to your intended consumer.
It's not as clean as the resumer one-liner, but it does avoid the extra dependency.
(Update: in v0.10.26 through v9.2.1 so far, a call to push directly from the REPL prompt will crash with a not implemented exception if you didn't set _read. It won't crash inside a function or a script. If inconsistency makes you nervous, include the noop.)
Do not use Jo Liss's resumer answer. It will work in most cases, but in my case it cost me a good 4 or 5 hours of bug hunting. There is no need for third-party modules to do this.
NEW ANSWER:
var Readable = require('stream').Readable
var s = new Readable()
s.push('beep') // the string you want
s.push(null) // indicates end-of-file basically - the end of the stream
This should be a fully compliant Readable stream. See here for more info on how to use streams properly.
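For instance, piping the stream above to stdout prints the pushed text (a minimal sketch using the s defined above):
s.pipe(process.stdout) // prints "beep"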
OLD ANSWER:
Just use the native PassThrough stream:
var stream = require("stream")
var a = new stream.PassThrough()
a.write("your string")
a.end()
a.pipe(process.stdout) // piping will work as normal
/*stream.on('data', function(x) {
// using the 'data' event works too
console.log('data '+x)
})*/
/*setTimeout(function() {
// you can even pipe after the scheduler has had time to do other things
a.pipe(process.stdout)
},100)*/
a.on('end', function() {
console.log('ended') // the end event will be called properly
})
Note that the 'close' event is not emitted (which is not required by the stream interfaces).
From node 10.17, stream.Readable has a from method to easily create streams from any iterable (which includes array literals):
const { Readable } = require("stream")
const readable = Readable.from(["input string"])
readable.on("data", (chunk) => {
console.log(chunk) // will be called once with `"input string"`
})
Note that, at least between 10.17 and 12.3, a string is itself an iterable, so Readable.from("input string") will work but emit one event per character. Readable.from(["input string"]) will emit one event per item in the array (in this case, one item).
Also note that in later versions (probably since 12.3, as the documentation says the function was changed then), it is no longer necessary to wrap the string in an array.
https://nodejs.org/api/stream.html#stream_stream_readable_from_iterable_options
Just create a new instance of the stream module and customize it according to your needs:
var Stream = require('stream');
var stream = new Stream();

stream.pipe = function(dest) {
  dest.write('your string');
  return dest;
};

stream.pipe(process.stdout); // in this case the terminal, change to ya-csv
or
var Stream = require('stream');
var stream = new Stream();

stream.on('data', function(data) {
  process.stdout.write(data); // change process.stdout to ya-csv
});

stream.emit('data', 'this is my string');
Edit: Garth's answer is probably better.
My old answer text is preserved below.
To convert a string to a stream, you can use a paused through stream:
through().pause().queue('your string').end()
Example:
var through = require('through')
// Create a paused stream and buffer some data into it:
var stream = through().pause().queue('your string').end()
// Pass stream around:
callback(null, stream)
// Now that a consumer has attached, remember to resume the stream:
stream.resume()
There's a module for that: https://www.npmjs.com/package/string-to-stream
var str = require('string-to-stream')
str('hi there').pipe(process.stdout) // => 'hi there'
Another solution is to pass a read function to the constructor of Readable (see the docs for stream Readable options):
const { Readable } = require('stream');

var s = new Readable({
  read(size) {
    this.push("your string here")
    this.push(null)
  }
});
You can then use s.pipe as usual.
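For instance, a minimal sketch piping the stream above to stdout:
s.pipe(process.stdout) // prints "your string here"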
In CoffeeScript:
{Readable} = require 'stream'

class StringStream extends Readable
  constructor: (@str) ->
    super()

  _read: (size) ->
    @push @str
    @push null
use it:
new StringStream('text here').pipe(stream1).pipe(stream2)
I got tired of having to re-learn this every six months, so I just published an npm module to abstract away the implementation details:
https://www.npmjs.com/package/streamify-string
This is the core of the module:
const Readable = require('stream').Readable;
const util = require('util');

function Streamify(str, options) {
  if (! (this instanceof Streamify)) {
    return new Streamify(str, options);
  }
  Readable.call(this, options);
  this.str = str;
}

util.inherits(Streamify, Readable);

Streamify.prototype._read = function (size) {
  var chunk = this.str.slice(0, size);
  if (chunk) {
    this.str = this.str.slice(size);
    this.push(chunk);
  }
  else {
    this.push(null);
  }
};

module.exports = Streamify;
str is the string that must be passed to the constructor upon invocation, and it will be output by the stream as data. options are the typical options that may be passed to a stream, per the documentation.
According to Travis CI, it should be compatible with most versions of node.
Here's a tidy solution in TypeScript:
import { Readable } from 'stream'

class ReadableString extends Readable {
  private sent = false

  constructor(
    private str: string
  ) {
    super();
  }

  _read() {
    if (!this.sent) {
      this.push(Buffer.from(this.str));
      this.sent = true
    }
    else {
      this.push(null)
    }
  }
}

const stringStream = new ReadableString('string to be streamed...')
In Node.js, you can create a readable stream in a few ways:
SOLUTION 1
You can do it with the fs module. The function fs.createReadStream() allows you to open a readable stream; all you have to do is pass the path of the file to start streaming from.
const fs = require('fs');
const readable_stream = fs.createReadStream('file_path');
SOLUTION 2
If you don't want to create a file, you can create an in-memory stream and do something with it (for example, upload it somewhere). You can do this with the stream module: import Readable from the stream module and create a readable stream. When creating the object, you can also implement the read() method, which is used to read data out of the internal buffer. If no data is available to be read, null is returned. The optional size argument specifies a specific number of bytes to read. If the size argument is not specified, all of the data contained in the internal buffer will be returned.
const Readable = require('stream').Readable;

const readable_stream = new Readable({
  read(size) {
    // ...
  }
});
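For example, a minimal read() implementation that emits a fixed string once and then signals end-of-stream (a sketch, not the only way to do it):
const { Readable } = require('stream');

let sent = false;
const readable_stream = new Readable({
  read(size) {
    if (!sent) {
      this.push('some in-memory data'); // queue the data once
      sent = true;
    } else {
      this.push(null); // signal end-of-stream
    }
  }
});

readable_stream.pipe(process.stdout);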
SOLUTION 3
When you are fetching something over the network, it can often be fetched as a stream (for example, a PDF document from some API).
const axios = require('axios');

const readable_stream = (await axios({
  method: 'get',
  url: "pdf_resource_url",
  responseType: 'stream'
})).data;
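The resulting stream can then be piped like any other, e.g. to write the fetched PDF to disk (a sketch; the filename is made up):
const fs = require('fs');
readable_stream.pipe(fs.createWriteStream('document.pdf'));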
SOLUTION 4
Third-party packages can support creating streams as a feature. That is the case with the aws-sdk package, commonly used for uploading files to S3:
const file = s3.getObject(params).createReadStream();
JavaScript is duck-typed, so if you just copy a readable stream's API, it'll work just fine. In fact, you can probably skip implementing most of those methods or just leave them as stubs; all you need to implement is what the library actually uses. You can use Node's pre-built EventEmitter class to deal with events, too, so you don't have to implement addListener and such yourself.
Here's how you might implement it in CoffeeScript:
class StringStream extends require('events').EventEmitter
  constructor: (@string) -> super()

  readable: true
  writable: false

  setEncoding: -> throw 'not implemented'
  pause: ->   # nothing to do
  resume: ->  # nothing to do
  destroy: -> # nothing to do
  pipe: -> throw 'not implemented'

  send: ->
    @emit 'data', @string
    @emit 'end'
Then you could use it like so:
stream = new StringStream someString
doSomethingWith stream
stream.send()
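For reference, a rough JavaScript equivalent of the same duck-typing idea (a sketch, not a spec-compliant stream; only send and the flags a consumer might check are implemented):
const { EventEmitter } = require('events');

class StringStream extends EventEmitter {
  constructor(string) {
    super();
    this.string = string;
    this.readable = true;
    this.writable = false;
  }
  pause() {}   // nothing to do
  resume() {}  // nothing to do
  destroy() {} // nothing to do
  send() {
    this.emit('data', this.string);
    this.emit('end');
  }
}

const stream = new StringStream('some string');
stream.on('data', (d) => console.log(d));
stream.send();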
