So I've been using NodeJS, but I have a heavy background in C and C++, and I would like to know how I would "simulate" the header effect in NodeJS.
I have the following code
foo.js
var Discord = require("discord.js");
var request = require('request');
var http = require('http');
var express = require('express');
var util = require('./dead.js');
util.beef()
then inside the other .js file
dead.js
exports.module = {
beef: function(){ request(something) }
}
I'm trying to make use of the request variable declared earlier inside foo.js, but it won't work because Node says it doesn't exist (so, OK, it went out of scope).
Do I have to require every file I want to use in dead.js?
Would using require impact the performance too much?
When is it preferable to have a single long .js file rather than multiple files with a require in each one?
You need to require() every module in every file that uses it.
You cannot share variables directly across files, and this is a good thing (it prevents naming conflicts).
require() caches modules after the first load, so repeated requires are cheap and there are no real performance concerns.
You should not put everything in a single giant JS file; that would be hard to maintain.
Related
I'm running into an issue with Node which could be adversely affecting the speed of my application.
Essentially my question is what is the proper way to use the same services/dependencies multiple places in my application.
For Example
// db.js File
Contains database connections and schemas
...
//app.js
var db = require("./db.js")
var users = require("./user-route.js")
var webhooks = require("./webhooks-route.js")
var andOthers = require("./andOthers-route.js")
...
// *-route.js represents all route files
var db = require("./db.js")
...
As you can see, each route file imports db.js. Does this affect performance, and if so, how do you avoid it?
Multiple requires of the same module are cached.
In other words, the first time "db.js" is required, it will be loaded and evaluated, and the resulting module object is cached in memory.
Subsequent calls to require("db.js") will just return the already-cached JS object.
This is documented here: Node.js Modules - Caching.
In NodeJS I would like to simply execute a file and allow it to make modifications to the global namespace. I know that this is not the best practice, and if I was designing the project myself, I would make sure that each module exports a single variable.
I am converting a poorly structured SPA project joined by script tags into node, and I would like to do so incrementally.
Right now I have:
require('./three.js')
This is a version of three.js which simply fills a global variable named 'THREE' with the contents of the module. Since require() wraps each file in its own module scope, the global variable is not created for me.
So what I'd like to do is just run an entire js file and allow it to create global variables.
Is there an elegant way to do this?
You are working on a Single Page Application that runs on a Node server, so why not use module.exports? That way you can access the module under whatever namespace you give it.
For example:
var THREE_74 = require('./three');
Now you can access it as THREE_74.HemisphereLight(), which is probably what you are doing.
And if you want to take your application out of Node, keep the same layout you have now (since you are converting an SPA application): create your index.html, load the require.js file ( http://requirejs.org/docs/download.html ) in a script tag, and load all your files after that script (inside those files, keep the require calls as you are doing now).
Then run a one-line server, python -m SimpleHTTPServer, and open it in the browser. Simple :)
I was able to find this snippet. It isn't native to Node, but it works well as a provisional solution!
var fs = require('fs');
var vm = require('vm');
var includeInThisContext = function(path) {
    var code = fs.readFileSync(path, 'utf8'); // must be a string, not a Buffer
    vm.runInThisContext(code, { filename: path }); // newer Node takes an options object
}.bind(this);
includeInThisContext("<PATH_TO_FILE>");
I have seen that some projects use the common
var myModule = require('myModule');
but in other cases something like this is used:
require('myModule');
What's the difference between those two?
One assigns the module to a variable, the other only requires it. Both load and run the script.
With require('foo'), you require the module and load the entry point script. This will evaluate any static code in that script when the module loads for the first time. You do not get access to any exports and cannot reference the module later without requiring it again.
The var bar = require('foo') behaves similarly, except it keeps a reference to the exports and allows you to use them later.
The require-without-assign form is often seen when the "module" is actually some other type of resource, such as a CSS file, and require runs some code to load that CSS into the current page. In CommonJS modules without any initialization code, the require-without-assign form will pre-load the module but do little else.
Take a module like:
let connection = new ServerConnection();
export default class Connection {
static getConnection() {
return connection;
}
}
The require-without-assign form will load the script, run it, and create the connection. You won't be able to use it, but it will exist.
The require-with-assign form will load, run, create, and provide a reference. You will be able to call bar.getConnection() and get access to the connection.
In the above example, if you use require without assign, you won't have access to the connection and will never be able to close it, which could be a problem.
I realize that JavaScript is not usually used to copy folders or files, but I'm using a wsf file written in JavaScript for use only on my local system.
I'll give a simplified explanation of the problem I have: I have a folder C:/Program Files/Folder, which has three files in it, File1, File2, and File3. I want to copy only File1 and File2, because File3 is unnecessary for me to copy, and is in use by another process which cannot be killed. (In reality I have a folder with hundreds of files, and I want to copy all of them except for one or two.) Aside from initializing each file and doing fso.fileCopy() on each individual file, is there some way to copy the entire folder, excluding File3? Some kind of exclusion list maybe?
What I have:
var fso = new ActiveXObject("Scripting.FileSystemObject");
var originalFolder = fso.GetFolder("C:\\Program Files\\Folder");
originalFolder.Copy("D:\\Program Files\\Folder");
This would crash, since File3 is in use by a process. I don't want to have to do
var file1 = fso.getFile("C:\\Program Files\\Folder\\File1");
file1.Copy("D:\\Program Files\\Folder\\File1");
var file2 = fso.getFile("C:\\Program Files\\Folder\\File2");
file2.Copy("D:\\Program Files\\Folder\\File2");
for hundreds of files.
I am very new to scripting, so I'm not even sure if it's possible to do something like this in JavaScript.
JavaScript supports try { ... } catch (exception) { ... } blocks. While I would highly recommend a language more suited to this sort of local scripting task (Perl, Ruby, Python, and many more), you can wrap your file.Copy() call in a try-catch block, catch the exception for files that are in use, and proceed without the entire thing crashing.
More information on JavaScript try-catch blocks here.
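A sketch of that approach for the WSF script: loop over the files, skip the excluded names, and catch per-file errors instead of crashing. The helper takes an array of objects with .Name and .Copy(dest) so the loop itself is plain JScript; the WSH-specific wiring is shown in comments.

```javascript
// Copy every file except the excluded names; catch per-file errors
// (e.g. "file in use") and carry on instead of crashing.
function copyAllExcept(files, destFolder, excluded) {
  var failed = [];
  for (var i = 0; i < files.length; i++) {
    var f = files[i];
    if (excluded[f.Name]) continue;        // exclusion list as a lookup object
    try {
      f.Copy(destFolder + "\\" + f.Name);
    } catch (e) {
      failed.push(f.Name);                 // locked/in-use file: note it, move on
    }
  }
  return failed;
}

// Under WSH you would build `files` from the FileSystemObject, e.g.:
//   var fso = new ActiveXObject("Scripting.FileSystemObject");
//   var files = [];
//   for (var e = new Enumerator(fso.GetFolder("C:\\Program Files\\Folder").Files);
//        !e.atEnd(); e.moveNext()) files.push(e.item());
//   copyAllExcept(files, "D:\\Program Files\\Folder", { File3: true });
```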
It is possible using Node.js (and maybe other JS Frameworks, but I only use Node so I don't know about the others)
var fs = require('fs');
fs.createReadStream('test.log').pipe(fs.createWriteStream('newLog.log'));
I am very new to scripting, so I'm not even sure if it's possible to do something like this in javascript.
It isn't. If JavaScript could do that, your computer would have 10 billion files containing spam copied to your file system every time you surfed the internet.
I have a need where I need to execute node code/modules in a node app (in a sandbox) with vm.createScript / script.runInNewContext. The host node app runs on heroku, so there is no local filesystem to speak of. I am able to download and run code that has no outside dependencies just fine, however the requirement is to be able to include other node modules as well. (as a build/packaging step would be ideal)
There are many existing solutions (browserify is one I've spent the most time with) which get close... but they inevitably generate a single blob of code (yeah!), meant to execute in a browser (boo!). Browserify for example generates dependencies on window., etc.
Does anyone know of a tool that will either read a package.json dependencies{} (or look at all require()'s in the source) and generate a single monolithic blob suitable for node's runInNewContext?
I don't think the solution you're looking for is the right one. Basically you want to grab a bunch of require('lib')'s, mush them together into a single JavaScript context, serialize that context into source code, pass that serialized form into the runInNewContext function to deserialize and rebuild the context, then deserialize your custom sandboxed code, and finally run the whole thing.
Wouldn't it make much more sense to just create a context Object that includes the needed require('lib')'s and pass that object directly into your VM? Based on code from the documentation:
var vm = require('vm'),
    initSandbox = {
        console: console, // the sandbox only sees what you put in it, so expose console too
        async: require('async'),
        http: require('http')
    },
    context = vm.createContext(initSandbox);
vm.runInContext("async.forEach([0, 1, 2], function(element) { console.log(element); });", context);
Now you have the required libraries accessible via the context without going through a costly serialization/deserialization process.