Node.js: initiate pg.Pool()

As per this example (db.js):
const pg = require('pg');
const client_config = {...};
const pool = new pg.Pool(client_config);

pool.on('error', function(err, client) {
  console.error('idle client error', err.message, err.stack);
});

module.exports.query = function(text, values, callback) {
  return pool.query(text, values, callback);
};

module.exports.connect = function(callback) {
  return pool.connect(callback);
};
Within an Express (generated) application, do I have to initialize/require the Pool (db.js) in my app.js on app start-up, or do I simply require db.js within my data models (which are in turn required in my routes)? Intuitively I would initialize the Pool on start-up rather than on each request to a route, to avoid multiple initializations, but I am fairly new to Node.js.

Scroll a little further -- there are usage examples.
The reason this works is Node's module caching. The first time db.js is required, all of its initialization code executes immediately. Subsequent require calls return the already-initialized module from the cache, so the pool is only created once no matter how many files require db.js. In Express, you can avoid requiring db.js all over the place by using app.set('db', db); to attach the module to the Express application; you can then invoke req.app.get('db').query(...) in your route code.
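A minimal sketch of that wiring, assuming the db.js module above (the route path, table name, and port are placeholders):

// app.js
const express = require('express');
const db = require('./db'); // first require: the pool is created here
const app = express();
app.set('db', db); // attach the module to the app

app.get('/users/:id', function(req, res, next) {
  // retrieve the same cached module instance from the app
  req.app.get('db').query(
    'SELECT * FROM users WHERE id = $1',
    [req.params.id],
    function(err, result) {
      if (err) return next(err);
      res.json(result.rows);
    }
  );
});

app.listen(3000);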
If your data needs are complex enough to involve models, you may want to look into higher-level data access libraries since pg is more of a driver (think JDBC if you've done any Java). There are a lot of options ranging from minimal data mappers (I maintain MassiveJS) to query builders (Knex) to full-scale ORMs (Sequelize).


Async Await and Shareable MongoDB connection

I am currently building a backend using Express and the native MongoDB Node driver. I have been researching, trying to find the best practice for managing connections to a Mongo database and using that connection across the app.
My current solution works and I get the desired results via tests in Postman. I came up with it after failing to find concrete answers on handling connections with MongoDB 3.x (without Mongoose) while keeping things modular.
Would someone be willing to give some feedback on my current solution?
My main concern is that this setup may not be performant, potentially due to opening and closing connections frequently, but I am not sure whether the way I am doing it is good or bad practice.
I created a db.js file to serve my connection:
const MongoClient = require("mongodb").MongoClient;
const base = process.env.PWD;
const config = require(base + "/config");

let db;
let client;

// Open the connection and cache both the client and the db handle
const connect = async () => {
  const url = config.url
  const dbName = config.dbName
  client = await MongoClient.connect(url)
  db = client.db(dbName) // db() is synchronous in the 3.x driver, no await needed
  return db
}

const disconnect = () => {
  client.close()
}

module.exports = {
  connect: connect,
  disconnect: disconnect
}
I then set the routes for my todos in index.js inside my todos folder, following a suggestion to keep all component files in their own folders (open to feedback on the folder structure):
const express = require('express'),
      base = process.env.PWD,
      router = express.Router(),
      todos = require(base + '/todos/todosController')
/* GET All Todos */
router.get('/all', todos.getTodos)
/* GET One Todo */
router.get('/todo/:id', todos.getTodo)
/* POST One Todo */
router.post('/todo/:id', todos.addTodo)
/* DELETE One Todo */
router.delete('/todo/:id', todos.deleteTodo)
module.exports = router;
Lastly the actual todosController.js which requires db.js
This is where I suspect some improvement could happen, but I am just not sure. I connect within the routes via an async function, await the connection, and assign it to db. Then I do my CRUD queries (all currently working properly) and disconnect at the end.
If this is considered performant and a good practice I am happy with that answer but if there is a way to do this better with current driver and syntax I would be happy for any feedback.
'use strict';
const base = process.env.PWD,
      client = require(base + '/db.js'),
      assert = require('assert')

let db

const getTodos = async (req, res) => {
  db = await client.connect()
  const collection = db.collection('documents')
  // Find all todos
  collection.find({}).toArray((err, todos) => {
    assert.equal(err, null)
    res.status(200).json(todos)
    // disconnect only after the query has returned; calling it outside
    // this callback would close the connection before the results arrive
    client.disconnect()
  })
}
It is a common misconception that opening and closing a connection on every request is more efficient. Opening a connection is expensive, which is one of the reasons connection pools exist. MongoDB supports connection pooling, and you should consider using it.
Here is an article on the subject of Express/MongoDB connection handling which starts right away with:
A common mistake developers make when connecting to the database is to
call MongoClient.connect() in every route handler to get a database
connection.
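Instead, you can connect once at start-up and share the client across requests. A minimal sketch of that approach, reusing the config module from the question (the poolSize value is just an illustration; the 3.x driver pools connections for you):

// db.js - connect once, then hand out the cached handle
const MongoClient = require('mongodb').MongoClient
const base = process.env.PWD
const config = require(base + '/config')

let db

const connect = async () => {
  // the client keeps an internal pool of connections
  const client = await MongoClient.connect(config.url, { poolSize: 10 })
  db = client.db(config.dbName)
}

const get = () => db

module.exports = { connect, get }

In app.js you would await connect() once before calling app.listen(), and controllers would use require('./db').get().collection('documents') without connecting or disconnecting per request.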

Is it safe to use a single Mongoose database from two files/processes?

I've been working on a server and a push notification daemon that will both run simultaneously and interact with the same database. The idea behind this is that if one goes down, the other will still function.
I normally use Swift but for this project I'm writing it in Node, using Mongoose as my database. I've created a helper class that I import in both my server.js file and my notifier.js file.
const Mongoose = require('mongoose');
const Device = require('./device'); // This is a Schema
var uri = 'mongodb://localhost/devices';
function Database() {
  Mongoose.connect(uri, { useMongoClient: true }, function(err) {
    if (err) return console.error('connection error:', err);
    console.log('connected');
  });
}

Database.prototype.findDevice = function(params, callback) {
  Device.findOne(params, function(err, device) {
    // etc...
  });
};
module.exports = Database;
Then separately from both server.js and notifier.js I create objects and query the database:
const Database = require('./db');
const db = new Database();
db.findDevice(params, function(err, device) {
  // Simplified, but I edit and save things back to the database via db
  device.token = 'blah';
  device.save();
});
Is this safe to do? When working with Swift (and Objective-C) I'm always concerned about making things thread safe. Is this a concern? Should I be worried about race conditions and modifying the same files at the same time?
Also, bonus question: how does Mongoose share a connection between files (or processes)? For example, Mongoose.connection.readyState returns the same thing from different files.
The short answer is "safe enough."
The long answer has to do with understanding what sort of consistency guarantees your system needs, how you've configured MongoDB, and whether there's any sharding or replication going on.
For the latter, you'll want to read about atomicity and consistency and perhaps also peek at write concern.
A good way to answer these questions, even when you think you've figured it out, is to test scenarios: hammer a duplicate of your system with fake data and events and see whether what happens is acceptable.
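For the race-condition worry specifically, one mitigation is to let MongoDB apply the change atomically rather than reading, mutating, and saving in two steps. A minimal sketch with Mongoose, assuming the Device schema from the question:

// Instead of findOne() followed by device.save(), which can interleave
// with writes from the other process, push the whole update server-side:
Device.findOneAndUpdate(
  params,
  { $set: { token: 'blah' } }, // applied atomically by MongoDB
  { new: true },               // return the document as it looks after the update
  function(err, device) {
    // device reflects the state after this atomic update
  }
);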

Real time notifications node.js

I'm developing a calendar application with Node.js, express.js and Sequelize.
The application is simple: you can create tasks in your calendar, and you can also assign tasks to other users of the system.
I need to create a notification system with socket.io, but I don't have experience with websockets. My big question is: how can my server send a notification to the user a task is assigned to?
My port configuration is in a file called bin/www, and my Express routes are defined in a file called server.js.
Any ideas?
I'd like to introduce you to a ready-to-use backend system that enables you to easily build modern web applications with useful functionality:
Persisted data: store your data and perform advanced searches on it.
Real-time notifications: subscribe to fine-grained subsets of data.
User Management: login, logout and security rules are no more a burden.
With this, you can focus on your main application development.
You can look at Kuzzle, which is a project I'm working on.
First, start the service:
http://docs.kuzzle.io/guide/getting-started/#running-kuzzle-automagically
Then, in your calendar application, you can use the JavaScript SDK.
At this point you can subscribe to changes and persist your tasks as documents:
const
  Kuzzle = require('kuzzle-sdk'),
  kuzzle = new Kuzzle('http://localhost:7512');

const filter = {
  equals: {
    user: 'username'
  }
}

// Subscribe to every change in the calendar collection containing a field `user` equal to `username`
kuzzle
  .collection('calendar', 'myproject')
  .subscribe(filter, function(error, result) {
    // triggered each time a document is updated/created!
    // Here you can display a message in your application, for instance
    console.log('message received from kuzzle:', result)
  })

// Each time you have to create a new task in your calendar, create a document that represents the task and persist it with kuzzle
const task = {
  date: '2017-07-19T16:07:21.520Z',
  title: 'my new task',
  user: 'username'
}

// Creating a document from another app will notify all subscribers
kuzzle
  .collection('calendar', 'myproject')
  .createDocument(task)
I think this can help you :)
Documents are served through socket.io, or native websockets when available.
Don't hesitate to ask questions ;)
As far as I understand, you need to pass your socket.io instance to other files, right?
var sio = require('socket.io');
var io = sio();
app.io = io;
And you simply attach it to your server in your bin/www file
var io = app.io
io.attach(server);
Or what else I like to do, is adding socket.io middleware for express
// Socket.io middleware that makes the instance available on each request
app.use((req, res, next) => {
  req.io = io;
  next();
});
So you can access it in some of your router files
req.io.emit('newMsg', {
  success: true
});
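To reach the specific user a task is assigned to, a common pattern is to join each connected socket to a room named after its user and emit to that room. A minimal sketch, assuming you can identify the user during the handshake (the userId query field and assigneeId variable are hypothetical):

// When a client connects, put its socket in a per-user room
io.on('connection', (socket) => {
  const userId = socket.handshake.query.userId; // hypothetical identification scheme
  socket.join(userId);
});

// Later, in a route handler, notify only the assignee of the new task
req.io.to(assigneeId).emit('newTask', {
  title: 'my new task'
});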

RabbitMQ amqp.node integration with nodejs express

The official RabbitMQ Javascript tutorials show usage of the amqp.node client library
amqp.connect('amqp://localhost', function(err, conn) {
  conn.createChannel(function(err, ch) {
    var q = 'hello';
    ch.assertQueue(q, {durable: false});
    // Note: on Node 6 Buffer.from(msg) should be used
    ch.sendToQueue(q, new Buffer('Hello World!'));
    console.log(" [x] Sent 'Hello World!'");
  });
});
However, I find it hard to reuse this code elsewhere. In particular, I don't know how to export the channel object, since it's created inside a callback. For example, in my Node.js/Express app:
app.post('/posts', (req, res) => {
  // Create a new Post
  // Publish a message saying that a new Post has been created
  // Another 'newsfeed' server consumes that message and updates the newsfeed table
  // How do I reuse the channel 'ch' object from amqp.node here?
});
Do you have any guidance on this one? Suggestions for other libraries are welcome (since I'm starting out, ease of use is what I consider most important).
amqp.node is a low-level API set that does minimal translation from AMQP to Node.js. It's basically a driver that should be used from a more friendly API.
If you want a DIY solution, create an API that you can export from your module and manage the connection, channel and other objects from within that API file.
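For illustration, here is a minimal sketch of such a wrapper, using amqplib's promise-based API instead of the callback API shown in the tutorial (the queue name is a placeholder):

// queue.js - keep the connection and channel behind a small API
const amqp = require('amqplib'); // promise-based API

let channelPromise = null;

function getChannel() {
  if (!channelPromise) {
    // connect lazily and only once; every caller shares the same channel
    channelPromise = amqp.connect('amqp://localhost')
      .then((conn) => conn.createChannel());
  }
  return channelPromise;
}

async function publish(queue, message) {
  const ch = await getChannel();
  await ch.assertQueue(queue, { durable: false });
  ch.sendToQueue(queue, Buffer.from(message));
}

module.exports = { publish };

A route handler can then simply await publish('posts', JSON.stringify(post)) without worrying about callback nesting.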
But I don't recommend doing it yourself. It's not easy to get things right.
I would suggest using a library like Rabbot (https://github.com/arobson/rabbot/) to handle this for you.
I've been using Rabbot for quite some time now, and I really like the way it works. It pushes the details of AMQP off to the side and lets me focus on the business value of my applications and the messaging patterns I need to build features.
As explained in the comments, you could use module.exports to expose the newly created channel. Of course, it will be overwritten each time you create a new channel, unless you keep an array of channels or some other data structure.
Assuming this is in a script called channelCreator.js:
amqp.connect('amqp://localhost', function(err, conn) {
  conn.createChannel(function(err, ch) {
    var q = 'hello';
    ch.assertQueue(q, {durable: false});
    // this is where you can export the channel object
    module.exports.channel = ch;
    // moved the sending code to some 'external script'
  });
});
In the script where you may want to use the "exported" channel:
var channelCreator = require("<path>/channelCreator.js");
// this is where you can access the channel object:
if (channelCreator.channel) {
  // use Buffer.from(...) on Node 6+, as noted in the tutorial code above
  channelCreator.channel.sendToQueue('QueueName', Buffer.from('This is Some Message.'));
  console.log(" [x] Sent 'Message'");
}
Hope this helps.

Let several modules use the same mongo instance

I'm building a larger web app, where the routes are divided into separate files.
All routes need a connection to the db, and therefore all of them require mongoskin, which is the module I'm using for MongoDB. Like this:
var mongo = require('mongoskin');
But soon after, I realised that merely requiring mongoskin wasn't enough for the routes to be able to talk to the db, because in my main app.js file I also did some additional configuration:
db = mongo.db('mongodb://localhost/dbName', {native_parser: true});
db.open(function(err) {
  if (!err) {
    console.log('Connected to mongodb://localhost/dbName');
  }
});
db.bind('clients');
db.bind('invoices');
I needed this db object to be shared as well...
My first attempt was to wrap the route file in an exported function that takes an argument. This argument is passed in when I require routes.js in my main app.js. This worked out fine, but I wasn't really fond of this solution... I think it became a bit messy.
My second approach, which I'm using right now, is to make a separate module of the whole db object.
var mongo = require('mongoskin');
var db = null;

module.exports = {
  initAndGetDb: function () {
    db = mongo.db('mongodb://localhost/dbName', {native_parser: true});
    db.open(function(err) {
      if (!err) {
        console.log('Connected to mongodb://localhost/dbName');
      }
    });
    db.bind('clients');
    db.bind('invoices');
    return db;
  },
  getDb: function () {
    return db;
  }
};
In my main app.js
var db = require('./db').initAndGetDb();
And in my routes.js
var db = require('../db').getDb();
Question: Is this approach a good working solution for sharing a db connection (and maybe other things in a similar fashion)? If you can see any problem with this, please let me know...
Overall I think this is fine, but you could simplify it to just:
// your db.js module
var mongo = require('mongoskin');

var db = mongo.db('mongodb://localhost/dbName', {native_parser: true});
db.bind('clients');
db.bind('invoices');
db.open(function(err) {
  if (err) {
    console.error('Could not connect to db', err);
    return;
  }
  console.log('Connected to mongodb://localhost/dbName');
});

module.exports = db;
The first time your code does require("./db");, the top-level code in db.js will run and connect to the db. When other modules require it, they will get access to the db without re-running the top level code and reconnecting.
Note that to be truly production-ready you would need to enhance this with:
getting DB connection details from a configuration system (env vars or a helper module; see the sketch after this list)
more robust logging
graceful handling of disconnects and reconnects while the app is running
graceful handling of the db being down when the web app starts
retry/backoff logic around connecting and reconnecting
a decision about what the web app does when it can't reach the DB: show a "fail whale" page, or exit the process
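A minimal sketch of the first item, assuming a hypothetical MONGO_URL environment variable with a local fallback:

// db.js with the connection URL pulled from the environment
var mongo = require('mongoskin');

// MONGO_URL is a hypothetical variable name; fall back to localhost for development
var url = process.env.MONGO_URL || 'mongodb://localhost/dbName';
var db = mongo.db(url, {native_parser: true});

module.exports = db;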
