Let several modules use the same mongo instance - javascript

I'm building a larger web app, where the routes are divided into separate files.
All routes need a connection to the db, and therefore all of them require mongoskin, which is the module I'm using for MongoDb. Like this:
var mongo = require('mongoskin');
But soon after, I realised that requiring mongoskin alone wasn't enough for the routes to be able to talk to the db, because in my main app.js file I had also made additional "configurations":
db = mongo.db('mongodb://localhost/dbName', {native_parser: true});
db.open(function(err) {
  if (!err) {
    console.log('Connected to mongodb://localhost/dbName');
  }
});
db.bind('clients');
db.bind('invoices');
I needed this db object to be shared as well...
My first attempt was to wrap each route file in an exported function that takes the db as an argument, which I passed in when requiring routes.js in my main app.js. This worked out fine, but I wasn't really fond of the solution... I think it became a bit messy.
My second approach, which I'm using right now, is to make a separate module of the whole db object.
var mongo = require('mongoskin');
var db = null;

module.exports = {
  initAndGetDb: function () {
    db = mongo.db('mongodb://localhost/dbName', {native_parser: true});
    db.open(function(err) {
      if (!err) {
        console.log('Connected to mongodb://localhost/dbName');
      }
    });
    db.bind('clients');
    db.bind('invoices');
    return db;
  },
  getDb: function () {
    return db;
  }
};
In my main app.js
var db = require('./db').initAndGetDb();
And in my routes.js
var db = require('../db').getDb();
Question: Is this approach a good working solution for sharing a db connection (and maybe other things in a similar fashion)? If you can see any problem with this, please let me know...

Overall I think this is fine, but you could simplify it to just:
// your db.js module
var mongo = require('mongoskin');

var db = mongo.db('mongodb://localhost/dbName', {native_parser: true});
db.bind('clients');
db.bind('invoices');
db.open(function(err) {
  if (err) {
    console.error('Could not connect to db', err);
    return;
  }
  console.log('Connected to mongodb://localhost/dbName');
});

module.exports = db;
The first time your code calls require("./db"), the top-level code in db.js runs and connects to the db. When other modules require it, they get the same db object back without re-running the top-level code or reconnecting.
Note that to be truly ready for production you would need to enhance this with:
- getting DB connection details from a configuration system (env vars or a helper module)
- more robust logging
- graceful handling of disconnects and reconnects while the app is running
- graceful handling of the db being down when the web app starts
- retry/backoff logic around connecting/reconnecting
- a decision about what the web app does when it can't reach the DB: show a fail-whale page, or exit the process
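For the configuration point, a minimal sketch of reading the connection details from the environment with hard-coded fallbacks; the variable names (MONGO_URL, MONGO_NATIVE_PARSER) are illustrative assumptions, not a standard:

```javascript
// Hypothetical config helper: pull connection details from env vars so
// the URL is no longer hard-coded in db.js. Names are made up for the demo.
const config = {
  url: process.env.MONGO_URL || 'mongodb://localhost/dbName',
  options: {
    // any value other than the string "false" enables the native parser
    native_parser: process.env.MONGO_NATIVE_PARSER !== 'false'
  }
};

console.log(config.url);
```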

Related

Async Await and Shareable MongoDB connection

I am currently working on building a backend using Express and the MongoDB native Node.js driver. I have been researching, trying to find the best "best practice" for managing connections to a Mongo database and using that connection across the app.
My current solution is working and I get desired results via tests in Postman. I have come up with this after being unable to find concrete answers on handling the connections with MongoDB 3.x (without Mongoose) and still modular.
Would someone be willing to give some feedback on my current solution?
My main concern would be that this setup would not be performant. I suspect it may not be, potentially due to opening/closing connections frequently, but I am not sure if the way I am doing it is good or bad practice.
I created a db.js file to serve my connection:
const assert = require("assert");
const MongoClient = require("mongodb").MongoClient;
const base = process.env.PWD;
const config = require(base + "/config");

let db;
let client;

const connect = async () => {
  const url = config.url
  const dbName = config.dbName
  client = await MongoClient.connect(url)
  db = await client.db(dbName)
  return db
}

const disconnect = () => {
  client.close()
}

module.exports = {
  connect: connect,
  disconnect: disconnect
}
I then set the routes for my todos in index.js inside of my todos folder, following a best-practice suggestion to keep all of a component's files in its own folder (open to feedback on the folder structure):
const express = require('express'),
      base = process.env.PWD,
      router = express.Router(),
      todos = require(base + '/todos/todosController')

/* GET All Todos */
router.get('/all', todos.getTodos)

/* GET One Todo */
router.get('/todo/:id', todos.getTodo)

/* POST One Todo */
router.post('/todo/:id', todos.addTodo)

/* DELETE One Todo */
router.delete('/todo/:id', todos.deleteTodo)

module.exports = router;
Lastly the actual todosController.js which requires db.js
This is where I suspect some improvement could happen, but I am just not sure. I connect within the route via an async function, await the connection, and assign it to db; I do my CRUD queries (all currently working properly) and then disconnect at the end.
If this is considered performant and a good practice I am happy with that answer but if there is a way to do this better with current driver and syntax I would be happy for any feedback.
'use strict';

const base = process.env.PWD,
      client = require(base + '/db.js'),
      assert = require('assert')

let db

const getTodos = async (req, res) => {
  db = await client.connect()
  const collection = await db.collection('documents')
  // Find all todos
  collection.find({}).toArray((err, todos) => {
    assert.equal(err, null)
    res.status(200).json(todos)
  })
  client.disconnect()
}
It seems to be a common misconception that opening and closing a connection on every request is more efficient. Opening a connection is expensive, and this is one of the reasons connection pools exist. MongoDB supports them, and you should consider using them.
Here is an article on the subject of Express/MongoDB connection handling which starts right away with:
A common mistake developers make when connecting to the database is to
call MongoClient.connect() in every route handler to get a database
connection.
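The connect-once alternative can be sketched like this. To keep the example self-contained, fakeConnect stands in for MongoClient.connect (an assumption, so no database is needed); with the real driver you would cache the promise returned by MongoClient.connect(url) in exactly the same way:

```javascript
// Connect once, share everywhere: cache the connection promise so every
// caller awaits the same in-flight (or already finished) connection.
let connectCount = 0;

function fakeConnect(url) {
  connectCount += 1; // with a real driver this would open a socket/pool
  return Promise.resolve({ url, db: (name) => ({ name }) });
}

let clientPromise = null;

function getClient() {
  if (!clientPromise) {
    clientPromise = fakeConnect('mongodb://localhost/todos');
  }
  return clientPromise;
}

// Two concurrent callers still trigger exactly one connect.
Promise.all([getClient(), getClient()]).then(([a, b]) => {
  console.log(a === b, connectCount); // true 1
});
```

Route handlers then call getClient() (or await it) instead of reconnecting, and nothing ever calls disconnect during normal operation.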

Is it safe to use a single Mongoose database from two files/processes?

I've been working on a server and a push notification daemon that will both run simultaneously and interact with the same database. The idea behind this is that if one goes down, the other will still function.
I normally use Swift but for this project I'm writing it in Node, using Mongoose as my database. I've created a helper class that I import in both my server.js file and my notifier.js file.
const Mongoose = require('mongoose');
const Device = require('./device'); // This is a Schema

var uri = 'mongodb://localhost/devices';

function Database() {
  Mongoose.connect(uri, { useMongoClient: true }, function(err) {
    console.log('connected: ' + err);
  });
}

Database.prototype.findDevice = function(params, callback) {
  Device.findOne(params, function(err, device) {
    // etc...
  });
};

module.exports = Database;
Then separately from both server.js and notifier.js I create objects and query the database:
const Database = require('./db');
const db = new Database();

db.findDevice(params, function(err, device) {
  // Simplified, but I edit and save things back to the database via db
  device.token = 'blah';
  device.save();
});
Is this safe to do? When working with Swift (and Objective-C) I'm always concerned about making things thread safe. Is this a concern? Should I be worried about race conditions and modifying the same files at the same time?
Also, bonus question: How does Mongoose share a connection between files (or processes?). For example Mongoose.connection.readyState returns the same thing from different files.
The short answer is "safe enough."
The long answer has to do with understanding what sort of consistency guarantees your system needs, how you've configured MongoDB, and whether there's any sharding or replication going on.
For the latter, you'll want to read about atomicity and consistency and perhaps also peek at write concern.
A good way to answer these questions, even when you think you've figured them out, is to test scenarios: hammer a duplicate of your system with fake data and events and see whether what happens is acceptable.
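To make the risk concrete: two concurrent read-modify-write sequences (find, change a field, save) can silently lose an update, which is exactly what MongoDB's atomic single-document operators (e.g. $inc or $set via updateOne/findOneAndUpdate) avoid, since they have no gap between the read and the write. A database-free simulation of the lost update:

```javascript
// Simulated lost update: both "processes" read the same value, then both
// write back "value + 1", so one increment disappears. An atomic update
// like {$inc: {badgeCount: 1}} performs the whole step server-side.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const device = { badgeCount: 0 };

async function lossyIncrement() {
  const seen = device.badgeCount;  // read
  await delay(10);                 // simulated network latency
  device.badgeCount = seen + 1;    // write back a stale value
}

const done = Promise.all([lossyIncrement(), lossyIncrement()]).then(() => {
  console.log(device.badgeCount); // 1, not 2: one update was lost
});
```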

Node.js: initiate pg.Pool()

As per example (db.js)
const pg = require('pg');
const client_config = {...};
const pool = new pg.Pool(client_config);

pool.on('error', function(err, client) {
  console.error('idle client error', err.message, err.stack);
});

module.exports.query = function(text, values, callback) {
  return pool.query(text, values, callback);
};

module.exports.connect = function(callback) {
  return pool.connect(callback);
};
and within an Express (generated) application, do I have to initiate/require the pool (db.js) in my app.js on app start-up, or do I simply require db.js within my data models (respectively required in my routes)? Intuitively I would initiate the pool on start-up rather than on each request to the routes, to avoid multiple initiations, but I am fairly new to Node.js.
Scroll a little further -- there are usage examples.
The reason this works is thanks to Node's module caching. The first time db.js is required, all the init code executes immediately. Subsequent require calls return the already-initialized module from the cache so the pool is already connected. In Express, you can avoid requiring db.js all over the place by using app.set('db', db); to attach the module to the Express application. You can then invoke req.app.get('db').query(...) in your route code.
If your data needs are complex enough to involve models, you may want to look into higher-level data access libraries since pg is more of a driver (think JDBC if you've done any Java). There are a lot of options ranging from minimal data mappers (I maintain MassiveJS) to query builders (Knex) to full-scale ORMs (Sequelize).
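A sketch of the app.set pattern mentioned above. To stay self-contained, miniApp stubs the relevant bit of Express (app.set/app.get are essentially a per-application key-value store, and req.app points back to the application), and db stands in for the exported pool wrapper; with real Express the calls look identical:

```javascript
// The pattern: attach shared modules to the application once at startup,
// then reach them from any route handler via req.app. miniApp mimics
// Express's settings store; db is a stand-in for the pg pool module.
const db = {
  query: (text) => 'ran: ' + text // stand-in for pool.query
};

const miniApp = {
  settings: {},
  set(key, value) { this.settings[key] = value; return this; },
  get(key) { return this.settings[key]; }
};

// app.js, at startup:
miniApp.set('db', db);

// Inside a route handler, Express supplies req.app:
const req = { app: miniApp };
console.log(req.app.get('db').query('SELECT 1')); // ran: SELECT 1
```

Note that in real Express, app.get(name) doubles as the settings getter when called with a single string argument, which is why req.app.get('db') works.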

nodejs modules code execution

I need a little assistance with understanding Node.js code organization. I'm from the C++ world and suspect that I have misunderstood some principles.
I need to implement a JS module which connects to MongoDB and exports a few methods for other modules, e.g. insert, update, delete.
when I write something like:
var db = MongoClient.connect(config.connectionString, {native_parser:true},function (err, db) {...});
exports.insert = function(a, b) {
// db using
//...
};
I supposed that "db" is a local static variable that would be initialized in any case at the time require('this module') is called, but it seems that's not so, and db is uninitialized at the time the exported functions are called? Another question: I suppose this should be implemented using "futures" (a class from C++; I didn't find a JavaScript analogue) to guarantee that the db object is completely constructed at the moment it is used?
So the problem, as I see it, is that you want to use the db, but since the db is returned asynchronously, it may or may not be available in the exported function; hence you need to convert the connect from async to sync.
Since the MongoDB driver cannot work synchronously, I suggest you use a wrapper: mongoskin.
https://github.com/kissjs/node-mongoskin
var mongo = require('mongoskin');
var db = mongo.db(config.connectionString, {native_parser:true});
Now this should work for you.
I have worked with C++ and Java before (some time back, not now) and am now working in Node.js. I think I understood your question. Here are some key points.
Yes, Node.js modules are somewhat like classes in that they encapsulate their variables, and you access them only through public methods (exposed through exports). I think you are aware that there is no class implementation at all here, but it loosely maps to that behaviour.
The key difference in Node.js is the asynchronous nature of resource instantiation. By this I mean that if there are two statements, stmt1 and stmt2, and stmt1 takes time, Node.js does not wait for it to finish (that would be synchronous behaviour); instead it moves on to stmt2. In the pre-Node.js world, we assume that reaching stmt2 means stmt1 is complete.
So, what is the workaround? How do you ensure you do something only after the db connection is obtained? If your code is not making db calls immediately, you could assume the connection will be through by the time you need it. If you want to invoke the db immediately, you write the code in a callback. Mongoose exposes events called 'open' and 'error' on the connection; you can use these to ensure the connection is open, and it is best practice to track the error event.
db.on('error', console.error.bind(console, 'connection error'));
db.once('open', function callback() {
  console.log("Connection with database succeeded.");
  // put your code
});
I am not aware of C++ future and so cannot comment on that.
Hope this helps !
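As for the "futures" part of the question: the closest JavaScript analogue to a C++ std::future is a Promise, a handle to a value that will exist later. A self-contained sketch, where connectToDb fakes the async driver call with a timer (an assumption, so no real database is involved):

```javascript
// A Promise is JavaScript's "future": callers get the handle immediately
// and attach work that runs only once the value is ready.
function connectToDb(url) {
  return new Promise((resolve) => {
    // Simulate the async connection handshake with a timer.
    setTimeout(() => resolve({ url, status: 'connected' }), 10);
  });
}

// Roughly future.get(): .then() (or await) fires once the db exists.
const ready = connectToDb('mongodb://localhost/dbName').then((db) => {
  console.log(db.status); // connected
  return db;
});
```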
[Updated] To add example
You could have db.js for setting up db connection and expose Mongoose object to create models.
'use strict';

var Mongoose = require('mongoose'),
    Config = require('./config');

Mongoose.connect(Config.database.url);

var db = Mongoose.connection;
db.on('error', console.error.bind(console, 'connection error'));
db.once('open', function callback() {
  console.log("Connection with database succeeded.");
});

exports.Mongoose = Mongoose;
exports.db = db;
You can include db.js in server.js like
var DB = require('./db.js');
which will do the initialisation.
You then use Mongoose (an object-document mapper for working with Mongo, and highly recommended) to define models of database objects as shown below.
// userModel.js
var mongoose = require('mongoose'),
    Schema = mongoose.Schema;

var UserSchema = new Schema({
  uid:    {type: Number, required: false},
  email:  {type: String, lowercase: true, required: true, index: {unique: true}}, // lowercase normalizes the address on save
  passwd: {type: String, required: false}
});

var user = mongoose.model('user', UserSchema);

module.exports = {
  User: user
};
For more information on mongoose, you can refer http://mongoosejs.com
The db is generally not closed, as I use it in a web environment where it is always on. A db connection pool is maintained and connections are reused optimally. I noticed a thread on SO which adds more details: Why is it recommended not to close a MongoDB connection anywhere in Node.js code?

Socket.IO Scoping Issue (Node.JS)

I am working on a node.js project that I am leveraging Socket.IO in, and am having an issue getting my head around a scoping issue. Here is what I am trying to do:
var io = require('socket.io').listen(80);
var session_manager = require('./includes/session_manager');

// client joins the socket server
io.sockets.on('connection', function(client) {
  client.on('X.Session.join', function(session_id, client) {
    session_manager.joinSession(session_id, function(err, session) {
      // do whatever
    });
  });

  // BRING IN MORE LISTENERS/EMITTERS?
  require('someModuleIBuild');
});
As you can see I am basically setting up the initial connection, joining a session via a managing class (so I know who to emit to for which session) and then I am trying to dynamically bring in some custom stuff that ALSO is going to be emitting and listening via the socket connection.
So how do I reference this current connection from within the confines of my custom modules? All the examples I have seen have all the "on" and "emit" functions in one file, which seems like it could get out of control pretty quickly.
I am possibly over-thinking/over-complicating this (this is my first node.js project, first socket-based project, first mostly-javascript project....etc) but any help would be appreciated.
Create your modules like this, and you can pass the client into the module:
module.exports = function(client) {
  client.on("whatever", function () {
  });

  client.on("whenever", function (data) {
  });
};
and then do the require like this
require('someModuleIBuild')(client);
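Putting it together, with a stub in place of a real Socket.IO socket so the sketch runs on its own (a real socket's emit sends to the remote peer; this stub just plays events back to the registered listeners):

```javascript
// someModuleIBuild, inlined: it exports a function that receives the
// connected client and registers its own listeners on it.
const received = [];
const someModule = function (client) {
  client.on('whenever', function (data) {
    received.push(data);
    console.log('someModule got', data);
  });
};

// Stub socket: on() records listeners, trigger() replays an event.
const listeners = {};
const fakeClient = {
  on(event, fn) { listeners[event] = fn; },
  trigger(event, data) { if (listeners[event]) listeners[event](data); }
};

// Inside io.sockets.on('connection', ...) you would call:
someModule(fakeClient);

fakeClient.trigger('whenever', 42); // someModule got 42
```

This way each module only ever sees the client it was handed, and the connection callback in the main file stays a short list of module registrations.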