Changing exports.X in a function seems to not work...
I want to be able to load settings from a file and access them in Node.js. I have this working currently; however, the clients connecting to my Node application can edit what's in the settings file. Unfortunately, as it stands, the Node application has to be restarted for the changes to take effect. Is there a way I can reload module.exports on the fly?
EDIT:
The settings file is literally a JSON string.
My settings module is require()d in almost every single file, and there are a lot of files... so reloading it on a per-file basis is out of the question. I do, however, know precisely when someone makes a change to the settings.
If you are using require to load the settings and only referencing the settings from one module, then doing something along the lines of:
delete require.cache[require.resolve(filename)];
will work for you.
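For example, a small helper that forces a fresh read (a sketch; the settings path is illustrative):
function reloadSettings() {
    // drop the cached module, then re-require to pick up the new contents
    delete require.cache[require.resolve('./settings.json')];
    return require('./settings.json');
}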
If, on the other hand, multiple modules will be referencing these settings, that approach can become a bit unwieldy and open you up to unforeseen bugs. For example, if any of the modules are holding on to a reference to the required settings file, they would each need to somehow learn that the settings had changed and update their references.
To alleviate (though not completely solve) the caching issue, you can build your settings interface so that its users must access the settings object through a function, and/or access individual properties through functions. Even with this model, someone may still decide to cache a setting, causing an obscure failure later down the road.
Using the simplest approach of a single getter for the settings object would look something like this:
var settings = require('./settings.json');
// ... watch for changes and reload by invalidating node's cache
module.exports = function() { return settings; }
Usage:
var settings = require('./path/to/settings');
settings().foo;
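Filling in the watching part, a possible (untested) sketch of that settings module, assuming the file lives at ./settings.json:
// settings.js: a sketch combining the getter with cache invalidation
var fs = require('fs');
var file = require.resolve('./settings.json');
var settings = require(file);

// re-read the file whenever it changes on disk
fs.watchFile(file, function () {
    delete require.cache[file]; // invalidate node's cache
    settings = require(file);   // the re-require picks up the new contents
});

module.exports = function () { return settings; };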
There are several libraries that do settings. Depending on your needs, I'm partial to nconf.
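For reference, a minimal nconf sketch (the file name mirrors the settings file from this question):
var nconf = require('nconf');
nconf.file({ file: 'settings.json' });

nconf.get('foo');           // read a setting
nconf.set('foo', 'bar');    // change it in memory
nconf.save(function (err) { // persist it back to settings.json
    if (err) { /* your error handling */ }
});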
I'd set up a file watcher here that picks up changes to the JSON file dynamically. It is not recommended practice to modify a JS script while the app is running.
Something like:
var _ = require("lodash");
var fs = require("fs");

var result = {};

fs.watch('my-settings.json', function (event) {
    // re-read the watched path directly; the `filename` argument passed to
    // this callback is not reliably provided on every platform
    fs.readFile('my-settings.json', function (err, data) {
        if (err) {
            // your error catching
            return;
        }
        _.extend(result, JSON.parse(data));
    });
});

module.exports = result;
Now, this comes with lots of caveats. First, fs.watch is not supported on all platforms:
http://nodejs.org/api/fs.html#fs_fs_watch_filename_options_listener
Second, it's really awkward to mutate a property like this. The general expectation is that a module's exports do not mutate. I'd instead recommend exposing a method whose result can change based on the state of the file: a getter for the resulting data.
Third, a file watcher can be expensive, memory-wise.
This is better code, IMHO:
var fs = require("fs");

var filename = 'my-settings.json';
var lastModified;
var mySetting;

module.exports = {
    getSettingAsync: function (callback) {
        fs.stat(filename, function (err, stat) {
            // compare timestamps by value; two Date objects are never `==`
            if (!err && stat.mtime.getTime() === lastModified) {
                callback(mySetting);
            } else {
                fs.readFile(filename, function (err, data) {
                    if (err) {
                        // your error catching
                        return;
                    }
                    // this assumes that your data is always correct
                    mySetting = JSON.parse(data).mySetting;
                    if (stat) lastModified = stat.mtime.getTime();
                    callback(mySetting);
                });
            }
        });
    }
};
In this case, we check the JSON file's modification time and expose the result as an async method. You could just as easily change the code to use the sync versions and return the value instead of invoking the callback. This version stats the file to see when it was last changed, which is cheaper than reading the whole file every time, reads the file only if it is newer, and saves you the need for a potentially buggy file watcher.
By the way, I've not tested this code and it may contain errors as is, but the concept is sound.
But, perhaps the more salient question, why not just store that value in the database?
I'm using Webdriver.io to run tests on a large number of pages. Because all the specs for the pages are in a JSON file, I have a special class that sets up the test. It looks like this:
module.exports = class PageTester {
    suiteName = '';
    browser = {};

    constructor (suiteName, browser) {
        this.suiteName = suiteName;
        this.browser = browser;
    }

    testModel(currentModel) {
        describe(this.suiteName + ' endpoint ' + currentModel.url, () => {
            this.browser.url(currentModel.url);
            /* it() statements for the test */
        });
    }
}
Then in my specs folder I have a file that loads the JSON and plugs it into the PageTester class, like this:
const PageTester = require('../modules/PageTester');
const models = require('/path/to/some/file.json');
const pageTester = new PageTester('Some Name', browser);
for (const modelName in models) {
    pageTester.testModel(models[modelName]);
}
When I run this code, WebdriverIO gives me the following warning:
WARN #wdio/mocha-framework: Unable to load spec files quite likely because they rely on `browser` object that is not fully initialised.
`browser` object has only `capabilities` and some flags like `isMobile`.
Helper files that use other `browser` commands have to be moved to `before` hook.
Spec file(s): /suite/test/specs/test.js
All the tests seem to run fine, so I don't actually understand what this warning is complaining about or what negative consequences ignoring it may have. So I would like to a) understand why this is happening, and b) find out how to get rid of this warning given the way my code is set up.
In my case, I resolved it by fixing the path in the require calls. I noticed that my path was wrong. But the error that wdio throws is not really helpful. :/
You can only interact with the browser object inside it blocks, because it is not fully initialised before the browser session is started.
See https://webdriver.io/blog/2019/11/01/spec-filtering.html for details.
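Given that hint (and the warning's own suggestion to move browser commands into a before hook), one way to restructure testModel so the browser call only runs once the session exists is, as an untested sketch keeping the class from the question:
testModel(currentModel) {
    describe(this.suiteName + ' endpoint ' + currentModel.url, () => {
        // runs after the browser session is initialised, unlike code
        // placed directly in the describe body
        before(() => {
            this.browser.url(currentModel.url);
        });
        /* it() statements for the test */
    });
}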
You should simply ensure that your spec file and the respective page file are kept in a similar folder structure.
I'm currently running a heavy computation (i.e. generating a Monte Carlo tree), which is an expensive operation. I only have a few seconds to build as big of a tree as I can, so I am using subprocesses in Node.js in order to build multiple trees, and then aggregate their data together to make a more informed decision.
I understand that subprocesses do not share information/memory, and within these subprocesses I need to use modules that are defined in a file called "epilog.js" on my machine.
When I run functions from epilog.js in the main file, it works just fine. But all of the functions called in my worker threads return absolutely nothing.
I have tested to make sure that the parameters of the functions I am trying to use from "epilog.js" aren't empty, and they're not. The problem isn't the parameters.
I have also tested what happens if I simply don't import the file: instead of getting an empty array, I get an error saying that there is no function called "findroles".
// My main thread.
var fork = require('child_process').fork; // needed for fork below
var fs = require('fs');
eval(fs.readFileSync('epilog.js') + '');

var process = fork('./buildGraph.js');
process.send({library});
// My worker thread.
// buildGraph.js
var fs = require('fs');
eval(fs.readFileSync('epilog.js') + '');

// receive message from master process
process.on('message', async (message) => {
    library = message["library"];
    console.log(findroles(library));
    // findroles(library) is a function defined in epilog.js that outputs an
    // array of "roles" given a parameter, library. For some reason this
    // outputs [] here, rather than giving me all of the roles. If I run this
    // exact line from my main thread, it gives no errors and outputs the
    // right array, e.g. ['red', 'white'].
});
I expect to get not an empty array but ['red', 'white'], as I do when I run the same line in the main thread. Does anyone have an idea why the functions behave inconsistently? I'm very new to Node.js, and this isn't a class focused much on software engineering in JavaScript, so I'd appreciate it if someone could dumb down what is going on, as this is all very new to me.
If your script does not find the function called findroles, then there is a problem with the importing method. Using the eval function for importing is not the normal way to import modules. Try something like this:
// buildGraph.js
const epilog = require("./epilog.js");
......
console.log(epilog.findroles(library));
Then in epilog.js:
exports.findroles = function (library) {
    // function content
};
You can find more info here:
https://www.w3schools.com/nodejs/nodejs_modules.asp
Based on the documentation and the example here, everything seems correct, but I think the problem comes from this line:
var process = fork('./buildGraph.js');
you might be overriding the original global process object.
Try changing it to:
const n = fork('./buildGraph.js');
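Putting both fixes together, a minimal (untested) sketch of the corrected main thread; library is assumed to be defined by the eval'd epilog.js, as in the question:
var fork = require('child_process').fork;
var fs = require('fs');
eval(fs.readFileSync('epilog.js') + '');

var child = fork('./buildGraph.js'); // no longer shadows the global `process`
child.send({library});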
This is NOT a dupe of this question. This question is NOT about Windows. It's a general question across OSes.
Is there an efficient way to get the correct case of a filename in node.js other than getting the directory and finding the matching name?
Example: Assume I have a folder with 3 files
+-someFolder
+-fooBar.txt
+-Moo.txt
+-ReadMe.txt
I want a function that passed somefolder/readme.txt returns someFolder/ReadMe.txt.
AFAICT the only way to do that is to call fs.readdir or fs.readdirSync and see if there is a matching file, something like:
const fs = require('fs');
const path = require('path');

function getActualFilename(filename) {
    if (!fs.existsSync(filename)) {
        throw new Error(`${filename} does not exist`);
    }
    return getActualFilenameImpl(filename);
}

function getActualFilenameImpl(filename) {
    const lcFilename = path.basename(filename).toLowerCase();
    // handles passing in `c:\\`
    if (!lcFilename) {
        return filename.toUpperCase();
    }
    const dirname = path.dirname(filename);
    let filenames;
    try {
        filenames = fs.readdirSync(dirname);
    } catch (e) {
        // we already verified the path exists above, so if this happens it
        // means the OS won't let us get a listing (UNC root on Windows),
        // so it's the best we can do
        return filename;
    }
    const matches = filenames.filter(name => lcFilename === name.toLowerCase());
    if (!matches.length) {
        throw new Error(`${filename} does not exist`);
    }
    const realname = matches[0];
    if (dirname !== '.') {
        if (dirname.endsWith('/') || dirname.endsWith('\\')) {
            return path.join(dirname, realname);
        } else {
            return path.join(getActualFilenameImpl(dirname), realname);
        }
    } else {
        return realname;
    }
}
The code above is pretty hacky. Trying it on different inputs has made it clear there are lots of edge cases. On Windows in particular, UNC paths fail, since you can't call fs.readdirSync once you get to the network path root. I have no idea which functions to call to figure out where that path separates, and then how to get the correct-case path for that part, which is probably an entirely separate set of Windows API calls (like calling whatever functions net use uses to show shares), etc...
I did notice path.dirname stops removing the trailing slash when it gets to a UNC path, so I'm using that to figure out when to stop trying.
Notes:
I get that, for example, on Linux (and optionally on Mac) the file system may be case-sensitive and I'd have to check for that, but I'm mostly concerned with Windows and standard macOS and will deal with case-sensitivity issues later.
I also get that JavaScript's toLowerCase might not match the OS's concept of case insensitivity, so if there is a solution that takes that into account, that would also be great!
I get that I could cache results or directory listing for a speed up but was wondering if there is some other function to use that doesn't read the entire directory listing.
I'm actually trying to solve several problems and am open to other suggestions
Problem 1: What filename to store in an app-specific database. It seems best to store the actual filename. See #3.
Problem 2: Figuring out whether 2 filenames reference the same file/folder. So if the user specifies SomeFolder/foobar.txt and somefolder/FOOBAR.txt, I don't want them to appear as 2 separate files if they are actually the same file. I need my app to know they reference the same file. I think I can call fs.stat for this and check if the ino field matches?
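A sketch of that check; isSameFile is an illustrative helper and assumes both paths exist:
const fs = require('fs');

// true if both paths resolve to the same underlying file,
// judged by device and inode numbers
function isSameFile(pathA, pathB) {
    const a = fs.statSync(pathA);
    const b = fs.statSync(pathB);
    return a.dev === b.dev && a.ino === b.ino;
}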
Problem 3: Related to problem 1, reloading metadata associated with the file. If the user specifies SomeFolder/foobar.txt at some point and my app generates metadata for the file, then at some other point in time they specify somefolder/FOOBAR.txt, I need to find the matching metadata. My current thinking is that by looking up the actual filename and using that to match, this problem would be solved. Although I suppose if they rename the file from FooBar.txt to foobar.txt it would lose the metadata. Not sure I care about that situation, though, since if they rename from FooBar.txt to SomethingElse.txt I definitely do not care if I lose the metadata.
That said, maybe I should store the ino as the key in my DB? Not sure I'm comfortable with that idea yet, but it's a possibility, and I'd love to know if others do that. Some checking reveals that, at least on macOS, the ino stays the same across moves and renames on the same drive, which would be good for my use case. On the other hand, I'd assume an ino is only valid per file system, so if I have 2 different drives mounted I could get clashing inos. I could use dev and ino together as a key, as in:
const stat = fs.statSync(filename);
const key = `${stat.dev}:${stat.ino}`;
Though I have no idea if stat.dev is always the same with removable storage. I assume it's not. So it seems like filename as key is probably better?
As long as the filesystem doesn't keep a connection between files with the same name in different cases (and I don't know of any such filesystem), there can't be a solution other than scanning the directory, because there is simply no API provided for this at any level.
So you have to either scan manually, as you already suggested, or use a library like glob to find files while ignoring case.
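For example, the glob package accepts a nocase option; a sketch, assuming glob is installed:
const glob = require('glob');

// matches 'somefolder/readme.txt' regardless of case
glob('somefolder/readme.txt', { nocase: true }, (err, matches) => {
    if (err) throw err;
    console.log(matches); // e.g. ['someFolder/ReadMe.txt']
});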
But you say you also have the filenames in a database. So if you can make sure the filenames in the DB exactly match the filenames in the filesystem, then you should be able to find files in different cases by doing case-insensitive DB queries. If it is an SQL database, it should already provide this functionality. If it is a more primitive data store, you may add another filename property which is always lower-case, so you can match against it to find the real file.
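A sketch of that last idea, with a Map standing in for the primitive data store (names are illustrative):
const index = new Map();

// store metadata keyed by the lower-cased name, but remember the real name
function remember(filename, metadata) {
    index.set(filename.toLowerCase(), { realName: filename, metadata });
}

// case-insensitive lookup
function lookup(anyCaseName) {
    return index.get(anyCaseName.toLowerCase());
}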
I've read a lot about Express / Socket.IO, and it's crazy how rarely you see an example that goes beyond a "Hello" transmitted directly from app.js. The problem is that it doesn't work like that in the real world... I'm actually desperate over a logic problem that seems far away from what the web gives me, which is why I wanted to point this out. I'm sure asking will be the solution! :)
I'm refactoring my app (because there were many mistakes, like using the global scope to hold libs, etc.); let's say I've got a huge system based on Socket.IO and Node.js. There's a loader in app.js which starts the socket system.
When someone joins the app, it require()s another module: it initializes many socket.on() listeners which are loaded dynamically from /*_socket.js files in a folder. Each function in those modules represents a socket listener, which makes it much easier to call from the front-end. It might look like this:
// Will call `user_socket.js` and method `try_to_signin(some params)`
Queries.emit_socket('user.try_to_signin', {some params});
The system itself works really well. But there's a catch: the module that loads all those files and understands what the front-end has sent also has to pass along libraries tied to req/res (sessions, cookies, others...), because the called methods are the core of the app and very often need those libraries.
In the previous example, we obviously need to check whether the user is already logged in.
// The *_socket.js file looks like this:
var $h = require(__ROOT__ + '/api/helpers');

module.exports = function ($s, $w) {
    var user_process = require(__ROOT__ + '/api/processes/user_process')($s, $w);
    return {
        my_method_called: function (reference, params, callback) {
            // Stuff using $s, $w, etc.
        }
    };
};
// And it's called this way:
// $s = services (a big object)
// $w = workers (a big object depending on $s)
// They are linked with the req/res from the page when they are instantiated
controller_instance = require('../sockets/' + controller_name + '_socket')($s, $w);

// After some processes...
socket_io.on(socket_listener, function (datas, callback) {
    // Will call the correct function, etc.
    $w.queries.handle_socket($w, controller_name, method_name, datas);
});
The good news: basically, it works.
The bad news: every time I refresh the page, the listeners are duplicated, because they are registered in a loop that runs on every page load.
[Screenshot omitted: the console shows the same output logged multiple times; it should have been one line.]
So I should register all the socket.on('connection', ...) stuff outside the page load, which means when the server starts... Yes, but I also need the req/res data to be able to load the libraries, and I only get that when the page is loaded!
It's a programming logic problem. I know I did something wrong, but I don't know where to go from here; I've got this big system which "basically" works, but there's a kind of paradox in the way I built it, and I can't figure out how to resolve it... I've been stuck for a couple of hours.
How can I refactor so that I can still get the current req/res-dependent libraries within a socket.on() call? Is there a trick? Should I think about changing the way I did it completely?
Also, is there another way to do what I want to do?
Thank you everyone!
NOTE: If I didn't explain well or if you want more code, just tell me :)
EDIT - SOLUTION: As seen in the answer below, we can use socket.once() instead of socket.on(), or there's also the socket.removeAllListeners() solution, which is less clean.
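A sketch of the removeAllListeners() variant, reusing the names from the question:
// clear handlers registered by a previous page load, then re-register
socket_io.removeAllListeners(socket_listener);
socket_io.on(socket_listener, function (datas, callback) {
    $w.queries.handle_socket($w, controller_name, method_name, datas);
});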
Try as below:
io.sockets.once('connection', function (socket) {
    io.sockets.emit('new-data', {
        channel: 'stdout',
        value: data
    });
});
Use once instead of on.
This problem is similar to the one described at the following link:
https://stackoverflow.com/questions/25601064/multiple-socket-io-connections-on-page-refresh/25601075#25601075
Is there some sort of session-like variable to hold an array in Node.js?
What I mean is something where I can define a name in one scope and access it in a different scope (i.e. Variable("Array1") is defined in function A but accessed in function B, and persists until it is destroyed).
The reason is that I am using Meteor to slice big files into small blobs and pass the chunks back to the server. I tried to use a combination of fs.writeFile and fs.appendFile, but somehow the file gets mutilated along the way (the file is a video, and playback errors occur with the copied file).
I read somewhere that a blob can be rebuilt via its constructor. However, I would need to pass it to a global or session-like variable to do so.
So... how can I use such a thing in Node.js?
There is such a thing – it is called a database :-)
When you're in Meteor, all files are loaded into a single running environment. Therefore, unlike in plain Node, a global variable created in one file can be accessed in any other. So you can write
Slices = {};
in one file, and then in another say
Slices['Array1'] = ...
Notice there is no var keyword when defining the Slices object, otherwise it wouldn't be global but scoped to the file.
There is obviously one problem with the above method, and that is persistence across server reloads. When the server crashes and restarts, or when you upload a new version, all such variables are recreated and you lose your data.
To prevent this, you need to store your variables in a place where they are retained permanently – a database of some kind. There are several solutions tailored for such runtime variables (such as Redis), but since you're using Meteor the natural solution would be to use the provided Mongo database. So just create a new collection on the server side
Slices = new Meteor.Collection('slices');
and use the usual find, insert, update and remove methods to access your variables.
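For example (the field names name and data are illustrative):
// persist or update a runtime value on the server;
// someChunks is a placeholder for whatever array you need to keep
Slices.upsert({ name: 'Array1' }, { $set: { data: someChunks } });

// read it back later, even after a server restart
var slice = Slices.findOne({ name: 'Array1' });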
If everything happens in the same process space, you can use a module as a singleton.
Remember, even if a module is included multiple times, the same copy is returned.
So if you have this module:
module.exports = new Array();
And if you include it from several other modules, each one of them will get the same array instance.
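For instance, assuming the module above is saved as shared.js (the name is illustrative):
// a.js
var shared = require('./shared');
shared.push('chunk-1');

// b.js: require returns the cached instance, so the push above is visible
var shared = require('./shared');
console.log(shared[0]); // 'chunk-1' (assuming a.js ran first)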
You can also have a more complex singleton:
var blocks = {};

module.exports.addBlock = function (name, block) {
    blocks[name] = block;
};

module.exports.getBlock = function (name) {
    return blocks[name];
};

module.exports.delBlock = function (name) {
    delete blocks[name];
};

module.exports.list = function () {
    return Object.keys(blocks);
};
In your different files, you would include and use this module like this:
var blocks = require('./blocks');
console.log(blocks.list());
Read about module caching here.