I am working on a script that runs during our build process in Jenkins right before npm install. My issue is that I need to download a JavaScript file from an external resource and read a variable from it.
unzipper.on('extract', () => {
  const content = fs.readFileSync(`${outputDir}/js/en.js`, 'utf8');
  eval(content); // Less smelly alternative?
  if (obj) {
    const str = JSON.stringify(obj);
    fs.writeFileSync(`${outputDir}/public/data.json`, str);
  } else {
    throw new Error('Variable `obj` not found');
  }
});
I know that "eval is evil", but any suggested alternatives I've found online don't seem to work.
I have tried different variations of new Function(obj)(), but Node seems to exit the script afterwards (the if case never runs).
Ideas?
Since Node.js provides an API to talk to the V8 virtual machine directly, it might be a good idea to use it. Basically, it's the API used by Node's require under the hood.
Assuming the js file in question contains the variable obj we're interested in, we do the following:
read the code from the file
append ; obj to the code to make sure it's the last expression it evaluates
pass the code to V8 for evaluation
grab the return value (which is our obj):
const fs = require('fs'),
  vm = require('vm');
const code = fs.readFileSync('path-to-js-file', 'utf8');
const obj = vm.runInNewContext(code + ';obj');
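As a variant (my sketch, not part of the original answer), you can run the code against an explicit sandbox object and read the variable off it afterwards, instead of appending ';obj' to the source:

// Works when en.js declares `var obj = ...` at top level: a top-level `var`
// becomes a property of the sandbox (the script's global object).
// Note: `let`/`const` declarations would NOT show up on the sandbox.
const sandbox = {};
vm.runInNewContext(code, sandbox);
const obj = sandbox.obj;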
This answer is heavily based on #georg's comments, but since it helped me I'll provide it as an alternative answer.
Explanation in the comments.
let content = fs.readFileSync(`${outputDir}/js/en.js`, 'utf8');
content += '; module.exports=obj'; // Export "obj" variable
fs.writeFileSync(`${outputDir}/temp`, content); // Create a temporary file
const obj = require(`${outputDir}/temp`); // Import the variable from the temporary file
fs.unlinkSync(`${outputDir}/temp`); // Remove the temporary file
Related
I would like to make use of a function called executeJavaScript() from the Electron webContents API. Since it is very close to eval(), I will use eval() in the example.
The problem:
I have a decent-sized script, but it is contained in a template string.
As this app expands, the script could grow a lot as a string.
I am not sure what the best practices are for this.
I also understand that eval() is dangerous, but I am interested in the principle of my question.
Basic eval example for my question:
// Modules
const fs = require('fs');
// CONSTANTS
const EXAMPLE_1 = 'EXAMPLE_1';
const EXAMPLE_2 = 'EXAMPLE_2';
const EXAMPLE_3 = 'EXAMPLE_3';
const exampleScriptFunction = require('./exampleScriptFunction');
const exampleScriptFile = fs.readFileSync('./exampleScriptFile.js');
// using direct template string
eval(`console.log(${EXAMPLE_1})`);
// using a method from a module, but this doesn't solve the neatness issue
eval(exampleScriptFunction(EXAMPLE_2));
// What I want is to just use a JS file because it is neater.
eval(`${exampleScriptFile}`);
exampleScriptFunction.js
module.exports = function(fetchType) {
  return `console.log(${fetchType});`;
};
This allows me to separate the script into a new file, but what if I have many more than one variable?
exampleScriptFile.js:
console.log(${EXAMPLE_3});
This clearly does not work, but I am just trying to show my thinking.
The backticks are not present in the file: fs loads it as a plain string, while the main file has backticks.
This does not work, but I do not know how else to show what I mean.
Because I am loading this with readFileSync, I figured the ES6 template string would work.
This allows me to write a plain JS file with proper syntax highlighting.
The issue is that the variables are defined in the file running the eval().
Perhaps I am completely wrong here and looking at this the wrong way. I am open to suggestions. Please do not downvote me for my inexperience in programming; I really do not know how else to ask this question. Thank you.
Assuming your source is stored in exampleScriptFile:
// polyfill
const fs = { readFileSync() { return 'console.log(`${EXAMPLE_3}`);'; } };
// CONSTANTS
const EXAMPLE_1 = 'EXAMPLE_1';
const EXAMPLE_2 = 'EXAMPLE_2';
const EXAMPLE_3 = 'EXAMPLE_3';
const exampleScriptFile = fs.readFileSync('./exampleScriptFile.js');
// What I want is to just use a JS file because it is neater.
eval(exampleScriptFile);
Update
Perhaps I wasn't clear. The ./exampleScriptFile.js should be:
console.log(`${EXAMPLE_3}`);
While what you're describing can be done with eval as #PatrickRoberts demonstrates, that doesn't extend to executeJavaScript.
The former runs in the caller's context, while the latter triggers an IPC call to another process with the contents of the code. Presumably this process doesn't have any information on the caller's context, and therefore, the template strings can't be populated with variables defined in this context.
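One common workaround (my own sketch, not from the original answer) is to serialize the needed values into the code string before handing it to executeJavaScript, so the renderer receives fully resolved source:

// Hedged sketch: `win` is assumed to be an existing BrowserWindow.
const EXAMPLE_3 = 'EXAMPLE_3';
// JSON.stringify safely embeds the value as a JavaScript literal
const code = `console.log(${JSON.stringify(EXAMPLE_3)});`;
win.webContents.executeJavaScript(code);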
Relevant snippets from electron/lib/browser/api/web-contents.js:
WebContents.prototype.send = function (channel, ...args) {
  // ...
  return this._send(false, channel, args)
}
// ...
WebContents.prototype.executeJavaScript = function (code, hasUserGesture, callback) {
  // ...
  return asyncWebFrameMethods.call(this, requestId, 'executeJavaScript',
  // ...
}
// ...
const asyncWebFrameMethods = function (requestId, method, callback, ...args) {
  return new Promise((resolve, reject) => {
    this.send('ELECTRON_INTERNAL_RENDERER_ASYNC_WEB_FRAME_METHOD', requestId, method, args)
    // ...
  })
}
Relevant snippets from electron/atom/browser/api/atom_api_web_contents.cc
//...
void WebContents::BuildPrototype(v8::Isolate* isolate,
                                 v8::Local<v8::FunctionTemplate> prototype) {
  prototype->SetClassName(mate::StringToV8(isolate, "WebContents"));
  mate::ObjectTemplateBuilder(isolate, prototype->PrototypeTemplate())
      // ...
      .SetMethod("_send", &WebContents::SendIPCMessage)
      // ...
}
I have another question (my last question). At the moment I am working on a Node.js project that contains many console.log() calls. This has worked okay so far, but I also want everything that's written to the console to be written to a log file. Can someone please help me?
For example:
console.log('The value of array position [5] is ' + array[5]);
In my real code it's a bit more than this, but it should give you an idea.
Thanks in advance.
Just run the script in your terminal like this...
node script-file.js > log-file.txt
This tells the shell to write the standard output of the command node script-file.js to your log file instead of the default, which is printing it to the console.
This is called redirection, and it's very powerful. Say you wanted to write all errors to a separate file...
node script-file.js >log-file.txt 2>error-file.txt
Now all console.log output is written to log-file.txt and all console.error output to error-file.txt.
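If you want both streams in the same file, the two redirections can be combined (standard shell behavior):
node script-file.js > log-file.txt 2>&1
Use >> instead of > if you want to append to the file rather than overwrite it on each run.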
I would use a library instead of re-inventing the wheel. I looked for a log4j-type library on npm, and it came up with https://github.com/nomiddlename/log4js-node
if you want to log to the console and to a file:
var log4js = require('log4js');
log4js.configure({
  appenders: [
    { type: 'console' },
    { type: 'file', filename: 'logs/cheese.log', category: 'cheese' }
  ]
});
now your code can create a new logger with
var logger = log4js.getLogger('cheese');
and use the logger in your code
logger.warn('Cheese is quite smelly.');
logger.info('Cheese is Gouda.');
logger.debug('Cheese is not a food.');
const fs = require('fs');
const myConsole = new console.Console(fs.createWriteStream('./output.txt'));
myConsole.log('hello world');
This will create an output file containing everything that was written through myConsole.log('hello world').
This is the easiest way to get console.log() output into a text file.
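A related sketch (assuming you also want errors kept separate): the Console constructor accepts distinct stdout and stderr streams.

const fs = require('fs');
// Hypothetical file names; stdout and stderr go to different files
const output = fs.createWriteStream('./stdout.log');
const errorOutput = fs.createWriteStream('./stderr.log');
const myConsole = new console.Console(output, errorOutput);
myConsole.log('goes to stdout.log');
myConsole.error('goes to stderr.log');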
You could try overriding the built in console.log to do something different.
var originalLog = console.log;
console.log = function(str){
  originalLog(str);
  // Your extra code
};
However, this places originalLog into the enclosing scope, so you should wrap it all in a function. This is called a closure.
(function(){
  var originalLog = console.log;
  console.log = function(str){
    originalLog(str);
    // Your extra code
  };
})();
To write the files themselves, see the fs module's documentation, and there are even cleaner ways to override console.log than the one I showed. Combining the two will get you the best possible solution.
Just write your own log function:
function log(message) {
  console.log(message);
  fs.writeFileSync(...);
}
Then replace all your existing calls to console.log() with log().
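A minimal sketch of that idea, using appendFileSync so messages accumulate across calls (the log file name is my assumption):

const fs = require('fs');

function log(message) {
  console.log(message);
  // append rather than overwrite, one message per line
  fs.appendFileSync('./app.log', message + '\n');
}

log('Server started');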
#activedecay's answer seems the way to go. However, as of April 30th, 2018, I have had trouble with that specific model (Node crashed due to the structure of the object passed to .configure, which does not seem to work in the latest version). In spite of that, I managed to work out an updated solution thanks to Node.js's debugging messages...
const myLoggers = require('log4js');
myLoggers.configure({
  appenders: { mylogger: { type: "file", filename: "path_to_file/filename" } },
  categories: { default: { appenders: ["mylogger"], level: "ALL" } }
});
const logger = myLoggers.getLogger("default");
Now if you want to log to said file, you can do it just like activedecay showed you:
logger.warn('Cheese is quite smelly.');
logger.info('Cheese is Gouda.');
logger.debug('Cheese is not a food.');
This, however, will not log anything to the console, and since I haven't figured out how to implement multiple appenders in one logger, you can still use the good old console.log().
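For what it's worth (an assumption based on newer log4js versions, untested here), the configuration appears to accept several appenders per category, which would restore console output alongside the file:

myLoggers.configure({
  appenders: {
    mylogger: { type: 'file', filename: 'path_to_file/filename' },
    console: { type: 'stdout' }
  },
  // both appenders attached to the same category
  categories: { default: { appenders: ['mylogger', 'console'], level: 'ALL' } }
});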
PS: I know this is a somewhat old thread and that the OP's particular problem was already solved, but since I came here for the same purpose, I may as well leave my experience to help anyone visiting this thread in the future.
Here is a simple solution for file logging, using @grdon/logger:
const logger = require('@grdon/logger')({
  defaultLogDirectory: __dirname + "/logs",
})
// ...
logger(someParams, 'logfile.txt')
logger(anotherParams, 'anotherLogFile.log')
Does anyone know how to get the name of a module in node.js / javascript
So let's say you do:
var RandomModule = require("fs");
// ...
console.log(RandomModule.name);
// -> "fs"
If you are trying to trace your dependencies, you can try using require hooks.
Create a file called myRequireHook.js
var Module = require('module');
var originalRequire = Module.prototype.require;
Module.prototype.require = function(path) {
  console.log('*** Importing lib ' + path + ' from module ' + this.filename);
  // preserve `this` so relative paths still resolve correctly
  return originalRequire.call(this, path);
};
This code will hook every require call and log it into your console.
Not exactly what you asked for at first, but maybe it helps you even more.
And you only need to require it once, in your main .js file (the one you start with node main.js).
So in your main.js, you just do that:
require('./myRequireHook');
var fs = require('fs');
var myOtherModule = require('./myOtherModule');
It will trace require in your other modules as well.
This is the way transpilers like babel work. They hook every require call and transform your code before load.
I don't know why you would need that, but there is a way.
The module variable, which is loaded automatically in every node.js file, contains an array called children. This array contains every child module loaded by require in your current file.
So you need to strictly compare your loaded reference with the cached exports of each entry in this array in order to discover which element corresponds to your module.
Look at this sample:
var pieces = require('../src/routes/headers');
var childModuleId = discoverChildModuleId(pieces, module);
console.log(childModuleId);
function discoverChildModuleId(object, moduleObj) {
  "use strict";
  var childModule = moduleObj.children.find(function(child) {
    return object === child.exports;
  });
  return childModule && childModule.id;
}
This code will find the corresponding entry in the children array and return its id.
I put module as a parameter of my function so you can export the function to a separate file. Otherwise, it would show you the child modules of the file where discoverChildModuleId resides (in the same file it makes no difference, but once exported it does).
Notes:
Module ids contain the full path name, so don't expect to find ../src/routes/headers. You will find something like /Users/david/git/...
My algorithm won't detect exported attributes like var Schema = require('mongoose').Schema. It is possible to write a function that handles this case too, but it would suffer from many issues; a rough sketch of the idea follows.
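The sketch below (my own illustration, not part of the original answer) scans require.cache for a module whose exports, or one of its exported attributes, matches the reference you hold:

function findModuleExporting(value) {
  for (var id of Object.keys(require.cache)) {
    var cached = require.cache[id];
    if (cached.exports === value) return id;
    // shallow check for exported attributes, e.g. require('mongoose').Schema
    for (var key of Object.keys(cached.exports || {})) {
      if (cached.exports[key] === value) return id + ' (attribute ' + key + ')';
    }
  }
  return null;
}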
From within a module (doesn't work at the REPL) you can...
console.log( global.process.mainModule.filename );
And you'll get '/home/ubuntu/workspace/src/admin.js'
There are some third party Javascript libraries that have some functionality I would like to use in a Node.js server. (Specifically I want to use a QuadTree javascript library that I found.) But these libraries are just straightforward .js files and not "Node.js libraries".
As such, these libraries don't follow the exports.var_name syntax that Node.js expects for its modules. As far as I understand that means when you do module = require('module_name'); or module = require('./path/to/file.js'); you'll end up with a module with no publicly accessible functions, etc.
My question then is "How do I load an arbitrary javascript file into Node.js such that I can utilize its functionality without having to rewrite it so that it does do exports?"
I'm very new to Node.js so please let me know if there is some glaring hole in my understanding of how it works.
EDIT: Researching into things more and I now see that the module loading pattern that Node.js uses is actually part of a recently developed standard for loading Javascript libraries called CommonJS. It says this right on the module doc page for Node.js, but I missed that until now.
It may end up being that the answer to my question is "wait until your library's authors get around to writing a CommonJS interface or do it your damn self."
Here's what I think is the 'rightest' answer for this situation.
Say you have a script file called quadtree.js.
You should build a custom node_module that has this sort of directory structure...
./node_modules/quadtree/quadtree-lib/
./node_modules/quadtree/quadtree-lib/quadtree.js
./node_modules/quadtree/quadtree-lib/README
./node_modules/quadtree/quadtree-lib/some-other-crap.js
./node_modules/quadtree/index.js
Everything in your ./node_modules/quadtree/quadtree-lib/ directory comes straight from your 3rd-party library.
Then your ./node_modules/quadtree/index.js file will just load that library from the filesystem and do the work of exporting things properly.
var fs = require('fs');
// Read and eval library
var filedata = fs.readFileSync('./node_modules/quadtree/quadtree-lib/quadtree.js', 'utf8');
eval(filedata);
/* The quadtree.js file defines a class 'QuadTree' which is all we want to export */
exports.QuadTree = QuadTree;
Now you can use your quadtree module like any other node module...
var qt = require('quadtree');
qt.QuadTree();
I like this method because there's no need to change any of the source code of your 3rd-party library, so it's easier to maintain. All you need to do on upgrade is look at their source code and ensure that you are still exporting the proper objects.
There is a much better method than using eval: the vm module.
For example, here is my execfile module, which evaluates the script at path in either context or the global context:
var vm = require("vm");
var fs = require("fs");
module.exports = function(path, context) {
  context = context || {};
  var data = fs.readFileSync(path, "utf8");
  vm.runInNewContext(data, context, path);
  return context;
};
And it can be used like this:
> var execfile = require("execfile");
> // `someGlobal` will be a global variable while the script runs
> var context = execfile("example.js", { someGlobal: 42 });
> // And `getSomeGlobal` defined in the script is available on `context`:
> context.getSomeGlobal()
42
> context.someGlobal = 16
> context.getSomeGlobal()
16
Where example.js contains:
function getSomeGlobal() {
  return someGlobal;
}
The big advantage of this method is that you've got complete control over the global variables in the executed script: you can pass in custom globals (via context), and all the globals created by the script will be added to context. Debugging is also easier because syntax errors and the like will be reported with the correct file name.
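As a related note (an assumption on my part about newer Node.js versions), runInNewContext also accepts an options object whose filename serves the same purpose as the string argument above:

const vm = require('vm');
const fs = require('fs');

const context = vm.createContext({ someGlobal: 42 });
const code = fs.readFileSync('example.js', 'utf8');
// `filename` makes stack traces point at the real file
vm.runInNewContext(code, context, { filename: 'example.js' });
console.log(context.getSomeGlobal()); // 42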
The simplest way is: eval(require('fs').readFileSync('./path/to/file.js', 'utf8'));
This works great for testing in the interactive shell.
AFAIK, that is indeed how modules must be loaded.
However, instead of tacking all exported functions onto the exports object, you can also tack them onto this (what would otherwise be the global object).
So, if you want to keep the other libraries compatible, you can do this:
this.quadTree = function () {
  // the function's code
};
or, when the external library already has its own namespace, e.g. jQuery (not that you can use that in a server-side environment):
this.jQuery = jQuery;
In a non-Node environment, this would resolve to the global object, thus making it a global variable... which it already was. So it shouldn't break anything.
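For example (a small usage sketch of my own, with hypothetical file names): since this at a module's top level is module.exports in Node, the consumer sees the function as a normal export.

// main.js -- quadtree.js contains the this.quadTree assignment above
var qt = require('./quadtree.js');
qt.quadTree();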
Edit:
James Herdman has a nice writeup about node.js for beginners, which also mentions this.
I'm not sure if I'll actually end up using this because it's a rather hacky solution, but one way around this is to build a little mini-module importer like this...
In the file ./node_modules/vanilla.js:
var fs = require('fs');
exports.require = function(path, names_to_export) {
  var filedata = fs.readFileSync(path, 'utf8');
  eval(filedata);
  var exported_obj = {};
  for (var i in names_to_export) {
    var to_eval = 'exported_obj[names_to_export[i]] = '
      + names_to_export[i] + ';';
    eval(to_eval);
  }
  return exported_obj;
};
Then when you want to use your library's functionality you'll need to manually choose which names to export.
So for a library like the file ./lib/mylibrary.js...
function Foo() { /* Do something... */ }
biz = "Blah blah";
var bar = {'baz':'filler'};
When you want to use its functionality in your Node.js code...
var vanilla = require('vanilla');
var mylibrary = vanilla.require('./lib/mylibrary.js',['biz','Foo'])
mylibrary.Foo // <-- this is Foo()
mylibrary.biz // <-- this is "Blah blah"
mylibrary.bar // <-- this is undefined (because we didn't export it)
Don't know how well this would all work in practice though.
I was able to make it work by updating their script, very easily, simply adding module.exports = where appropriate...
For example, I took their file and I copied to './libs/apprise.js'. Then where it starts with
function apprise(string, args, callback){
I assigned the function to module.exports = thus:
module.exports = function(string, args, callback){
Thus I'm able to import the library into my code like this:
window.apprise = require('./libs/apprise.js');
And I was good to go. YMMV, this was with webpack.
A simple include(filename) function with better error messages than plain eval (stack, filename, etc.) in case of errors:
var fs = require('fs');
// circumvent nodejs/v8 "bug":
// https://github.com/PythonJS/PythonJS/issues/111
// http://perfectionkills.com/global-eval-what-are-the-options/
// e.g. a "function test() {}" will be undefined, but "test = function() {}" will exist
var globalEval = (function() {
  var isIndirectEvalGlobal = (function(original, Object) {
    try {
      // Does `Object` resolve to a local variable, or to a global, built-in `Object`,
      // reference to which we passed as a first argument?
      return (1, eval)('Object') === original;
    } catch (err) {
      // if indirect eval errors out (as allowed per ES3), then just bail out with `false`
      return false;
    }
  })(Object, 123);
  if (isIndirectEvalGlobal) {
    // if indirect eval executes code globally, use it
    return function(expression) {
      return (1, eval)(expression);
    };
  } else if (typeof window !== 'undefined' && typeof window.execScript !== 'undefined') {
    // if `window.execScript` exists (old IE), use it
    return function(expression) {
      return window.execScript(expression);
    };
  }
  // otherwise, globalEval is `undefined` since nothing is returned
})();
function include(filename) {
  var file_contents = fs.readFileSync(filename, "utf8");
  try {
    globalEval(file_contents);
  } catch (e) {
    e.fileName = filename;
    var keys = ["columnNumber", "fileName", "lineNumber", "message", "name", "stack"];
    for (var i in keys) {
      var k = keys[i];
      console.log(k, " = ", e[k]);
    }
    //throw new Error("include failed");
  }
}
But it gets even dirtier with Node.js: you need to run it like this:
export NODE_MODULE_CONTEXTS=1
nodejs tmp.js
Otherwise you cannot use global variables in files included with include(...).
Is it easy/possible to do a simple include('./path/to/file') type of command in node.js?
All I want to do is have access to local variables and run a script. How do people typically organize node.js projects that are bigger than a simple hello world? (A fully functional dynamic website)
For example I'd like to have directories like:
/models
/views
... etc
Just do a require('./yourfile.js');
Declare all the variables that you want outside access as global variables.
So instead of var a = "hello", it will be GLOBAL.a = "hello", or just a = "hello".
This is obviously bad. You don't want to be polluting the global scope.
Instead, the suggested method is to export your functions/variables.
If you want the MVC pattern take a look at Geddy.
You need to understand CommonJS, which is a pattern to define modules. You shouldn't abuse the global scope (that's always a bad thing to do); instead you can use the exports token, like this:
// circle.js
var PI = 3.14; // PI will not be accessible from outside this module
exports.area = function (r) {
  return PI * r * r;
};
exports.circumference = function (r) {
  return 2 * PI * r;
};
And the client code that will use our module:
// client.js
var circle = require('./circle');
console.log('The area of a circle of radius 4 is '
  + circle.area(4));
This code was extracted from the Node.js API documentation:
http://nodejs.org/docs/v0.3.2/api/modules.html
Also, if you want to use something like Rails or Sinatra, I recommend Express (I couldn't post the URL, shame on Stack Overflow!)
If you are writing code for Node, using Node modules as described by Ivan is without a doubt the way to go.
However, if you need to load JavaScript that has already been written and isn't aware of node, the vm module is the way to go (and definitely preferable to eval).
For example, here is my execfile module, which evaluates the script at path in either context or the global context:
var vm = require("vm");
var fs = require("fs");
module.exports = function(path, context) {
  context = context || {};
  var data = fs.readFileSync(path, "utf8");
  vm.runInNewContext(data, context, path);
  return context;
};
Also note: modules loaded with require(…) don't have access to the global context.
If you are planning to load an external JavaScript file's functions or objects, load them into the current context using the following code; note the runInThisContext method:
var vm = require("vm");
var fs = require("fs");
var data = fs.readFileSync('./externalfile.js', 'utf8');
const script = new vm.Script(data);
script.runInThisContext();
// here you can use externalfile's functions or objects as if they were instantiated here. They have been added to this context.
Expanding on #Shripad's and #Ivan's answers, I would recommend that you use Node.js's standard module.exports functionality.
In your file for constants (e.g. constants.js), you'd write constants like this:
const CONST1 = 1;
module.exports.CONST1 = CONST1;
const CONST2 = 2;
module.exports.CONST2 = CONST2;
Then in the file in which you want to use those constants, write the following code:
const {CONST1 , CONST2} = require('./constants.js');
If you've never seen the const { ... } syntax before: that's destructuring assignment.
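As a side note, the same module can also be written with a single export statement using shorthand property names; to the best of my knowledge this is equivalent:

// constants.js
const CONST1 = 1;
const CONST2 = 2;
// shorthand for { CONST1: CONST1, CONST2: CONST2 }
module.exports = { CONST1, CONST2 };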
Sorry for the resurrection. You could use the child_process module to execute external JS files in Node.js:
var child_process = require('child_process');
//EXECUTE yourExternalJsFile.js
child_process.exec('node yourExternalJsFile.js', (error, stdout, stderr) => {
  console.log(`${stdout}`);
  console.log(`${stderr}`);
  if (error !== null) {
    console.log(`exec error: ${error}`);
  }
});
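If you need a value back from the child rather than just its printed output, child_process.fork opens an IPC channel between the two processes; a hedged sketch (file names are my assumptions, and the child must be a Node script that calls process.send):

const { fork } = require('child_process');

// yourExternalJsFile.js would call process.send(someValue) once it has the value
const child = fork('./yourExternalJsFile.js');
child.on('message', (value) => {
  console.log('received from child:', value);
});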