DDD: using db layer as a singleton, could this be problematic - javascript

I recently downloaded a DDD boilerplate to get an example of a JavaScript functional approach to DDD. As I went through and analyzed the code, I reached the awilix IoC setup and am a little confused by this approach. The code below is directly from the container.js file (https://github.com/joshuaalpuerto/node-ddd-boilerplate):
const { createContainer, asValue, asFunction } = require('awilix')
// you can do this
const app = require('./app')
const server = require('./interfaces/http/server')
const router = require('./interfaces/http/router')
const auth = require('./interfaces/http/auth')
const config = require('../config')
const logger = require('./infra/logging/logger')
const database = require('./infra/database')
const jwt = require('./infra/jwt')
const response = require('./infra/support/response')
const date = require('./infra/support/date')
const repository = require('./infra/repositories')
const container = createContainer()
// SYSTEM
container
  .register({
    app: asFunction(app).singleton(),
    server: asFunction(server).singleton(),
    router: asFunction(router).singleton(),
    logger: asFunction(logger).singleton(),
    database: asFunction(database).singleton(),
    auth: asFunction(auth).singleton(),
    jwt: asFunction(jwt).singleton(),
    response: asFunction(response).singleton(),
    date: asFunction(date).singleton(),
    config: asValue(config),
    repository: asFunction(repository).singleton()
  })
module.exports = container
If endpoints are being accessed that reach the db layer via a repository layer, should the database layer be a singleton? I would think this could open the door to unwanted side effects if all users are accessing the same db connection. Granted, I am probably missing something, as this is my first time seeing awilix in use, but it sure looks like only one instance of the database layer is being utilized. Am I missing something here? Is this actually safe, or could it really lead to issues?

Remember that NodeJS is single-threaded. There is only one node process running, so having a singleton is not bad. This is what saves machine resources over other architectures (like Java) where each request gets its own thread, memory, etc. The single instance is just the means of accessing the DB.
Also remember that NodeJS is good at handling lots of I/O at once. It takes quite a bit to slow the whole thing down. Even though the DB layer is a singleton, that may not be an issue for your application. There may never be a bottleneck at the DB level, or there may not be a (noticeable) delay with simultaneous requests. Rather than each request building a DB connection (which could be costly in time and resources), the DB connection is shared. Ease of getting started early in a project is more valuable than figuring out highly scalable things (which could have different solutions when the time comes).
One last thing. The singleton is an instance of Sequelize. Sequelize is a really popular NodeJS DB library, and it can even be configured with a connection pool. Now your single DB instance is managing requests via multiple connections, which will take you to the next level of scalability.
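For reference, this is roughly what enabling that pool looks like; a minimal sketch, where the connection string and pool sizes are purely illustrative and not taken from the boilerplate:

const Sequelize = require('sequelize')

// Hypothetical connection string and pool sizes, for illustration only
const sequelize = new Sequelize('postgres://user:pass@localhost:5432/app', {
  pool: {
    max: 10,     // at most 10 connections checked out at once
    min: 0,      // let the pool drain completely when idle
    idle: 10000  // return a connection to the pool after 10s of inactivity
  }
})

With this in place, the container can still register the database layer as a singleton: the one Sequelize instance hands out pooled connections per query.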

unwanted side effects if all users are accessing the same db connection
That's not how server apps usually work. Your users perform requests to an HTTP API, which then usually needs a single connection with a database throughout the request lifetime, fetching all kinds of information from all kinds of tables.
Even if you have a process that handles multiple users at the same time, that can all happen over the same database connection.
Unless your app needs more than one DB connection, there are no real-world adverse or positive consequences to using a singleton for it, besides the fact that it can look silly.

Related

How to send requests from React js to Node js to process something?

I am sorry for how I have framed the question in the title, but I have started programming very recently, so once again, I am really sorry.
I am developing a project with React js as my front-end and Node js as my backend. I have been successful in doing some basic test api calls to confirm the connection between the two, but now, how am I supposed to actually process different actions? For example, while a user is logging in, I need to first check whether they are an existing user or not, sign them up if they are not, delete a user account, change a username, etc.
I tried very hard to look for relevant articles, but all I could find are basic "one-time" api calls. What am I supposed to do for an entire batch of operations? From what I have understood, the process of sending a request from React and getting it processed in Node js is like this:
react js ======> node js   (request for operation)
node js ======> (process the operation)
node js ======> react js   (send a response back)
Please correct me if there are any mistakes in my question...
Thanks a lot!
This question is really broad, so I'm going to focus in on this part at a high level:
I tried very hard to look for relevant articles, but all I could find are basic "one-time" api calls. What am I supposed to do for an entire batch of operations?
I'm guessing when you talk about requests and APIs you're talking about HTTP requests and REST APIs, which are fundamentally "stateless" kinds of things; HTTP requests are self-contained, and the backend doesn't have to keep track of any application state between requests in order to speak HTTP.
But backend applications usually do maintain state, often in the form of databases. For example, many REST APIs expose CRUD (create, read, update, and delete) operations for important business entities — in fact, Stack Overflow probably does exactly this for answers. When I post this answer, my browser will probably send some kind of a "create" request, and when I inevitably notice and fix a typo I will send some kind of "update" request. When you load it in your browser, you will send a "read" request which will get the text of the answer from Stack Overflow's database. Stack Overflow's backend keeps track of my answer between the requests, which makes it possible to send it to many users as part of many different requests.
Your frontend will have to do something similar if your project does things which involve multiple interdependent requests — for example, you would need to keep track of whether the user is authenticated or not in order to show them the login screen, and you would need to hold onto their authentication token to send along with subsequent requests.
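For example, here is a minimal sketch of holding onto a token; the /login and /profile endpoints and the token field name are assumptions made up for illustration, not part of the question:

// Hypothetical endpoints and token shape, for illustration only
let authToken = null;

function login(username, password) {
  return fetch("http://localhost:9000/login", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username: username, password: password })
  })
    .then(res => res.json())
    .then(data => { authToken = data.token; }); // keep it for later requests
}

function getProfile() {
  // send the saved token along with a subsequent request
  return fetch("http://localhost:9000/profile", {
    headers: { "Authorization": "Bearer " + authToken }
  }).then(res => res.json());
}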
Edit: Let's walk through a specific example. As a user, I want to be able to create and read notes. Each note should have a unique ID and a string for its text. On the backend, we'll have a set of CRUD endpoints:
var express = require("express");
var router = express.Router();

class Note {
  constructor(id, text) {
    this.id = id;
    this.text = text;
  }
}

// This is a quick and dirty in-memory database!
var notes = {};

router.put("/note/:id", function(req, res) {
  // store a Note so the GET handler below can read its .text
  notes[req.params.id] = new Note(req.params.id, "hello"); // TODO: accept text from the request body as well
  res.send("");
});

router.get("/note/:id", function(req, res) {
  res.send(notes[req.params.id].text);
});

module.exports = router;
Then on the client side, one could create a note and then read it like this:
// this request will create the note
fetch("http://localhost:9000/note/42", { method: "PUT", body: "hello note!" })
  // chain the read so it only runs after the create request has finished
  .then(() => fetch("http://localhost:9000/note/42"))
  .then(res => res.text())
  .then(res => console.log(res));

Angular.js Handle data in frontend or backend only

Let's say I want to create a ToDo list using angular. I have a REST API that stores the items in db and provides basic operations. Now when I want to connect my angular app to the REST api I found two ways to do so following some tutorials online:
1. Data gets handled in the backend only: a service gets created that has a getAllTodos function. This function gets attached directly to the scope (e.g. to use it in ng-repeat):
var getAllTodos = function() {
  // Todo: Cache http request
  return $http...;
};

var addTodo = function(todo) {
  // Todo: Clear cache of getAllTodos
  $http...
};
2. Data gets handled in the frontend too: an init function initializes the todos variable in the service.
var todos = [];

var init = function() {
  $http...
  todos = // result of $http;
};

var getAllTodos = function() {
  return todos;
};

var addTodo = function(todo) {
  $http...
  todos.push(todo);
};
I've seen both ways in several tutorials, but I'm wondering which would be the best way. The first one is used in many tutorials where the author has attaching to a REST API in mind from the start. The second one is often used when the author first wants to create the functionality in the frontend and only later store data permanently using a backend.
Both ways have their advantages and disadvantages. The first one reduces code duplication between frontend and backend; the second one allows faster operations, because changes can be handled in the frontend first and the backend informed about them afterwards.
// EDIT: Frontend is the Angular.js client for me, backend the REST API on the server.
Separation of Frontend and Backend is often done for security reasons. You can locate Backend on a separate machine and then restrict access to that machine to only calls originating from the Frontend. The theory is that if the Frontend is compromised, the Backend has a lower risk factor. In reality if someone has compromised any machine on your network then the entire network is at risk on one level or another.
Another reason for a Backend/Frontend separation would be to provide database access through the Backend to multiple frontend clients. You have a single Backend with access to the DB and either multiple copies of the Frontend or different Frontends that access the Backend.
Your final design needs to take into account the possible security risks and also deployment and versioning. With the multiple-tier approach you can deploy individual Frontends without having to drop the Backend, and you can also "relocate" parts of the application without downtime. The more flexible the design of your application, the more complicated the deployment may become. The needs of your application will depend on whether you are writing a simple blog or a large enterprise application.
You need both frontend and backend functionality. In the frontend you prepare the data that is being sent, and in the backend you process the requests on the server.
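To make the second approach from the question concrete, here is a minimal sketch of a frontend-cached service that syncs with the backend; the module name, service name, and /api/todos endpoint are assumptions for illustration:

// AngularJS service: local cache plus backend sync (endpoint is hypothetical)
angular.module('todoApp').factory('TodoService', function($http) {
  var todos = [];

  return {
    // load the initial list from the backend once
    init: function() {
      return $http.get('/api/todos').then(function(response) {
        angular.copy(response.data, todos); // keep the same array reference for ng-repeat
      });
    },
    // synchronous read from the local cache
    getAllTodos: function() {
      return todos;
    },
    // optimistic update: push locally first, then inform the backend
    addTodo: function(todo) {
      todos.push(todo);
      return $http.post('/api/todos', todo);
    }
  };
});

Note the optimistic push in addTodo: the UI updates immediately, at the cost of having to roll back if the $http call fails.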

Node JS live text update with CloudMQTT

I have a node server which is connecting to CloudMQTT and receiving messages in app.js. I have my client web app running on the same node server and want to display my messages received in app.js elsewhere in a .ejs file, I'm struggling as to how best to do this.
app.js
// Create a MQTT Client
var mqtt = require('mqtt');
// Create a client connection to CloudMQTT for live data
var client = mqtt.connect('xxxxxxxxxxx', {
  username: 'xxxxx',
  password: 'xxxxxxx'
});

client.on('connect', function() { // When connected
  console.log("Connected to CloudMQTT");
  // Subscribe to the temperature
  client.subscribe('Motion', function() {
    // When a message arrives, do something with it
    client.on('message', function(topic, message, packet) {
      // ** Need to pass message out **
    });
  });
});
Basically you need a way for the client (browser code with EJS: HTML, CSS and JS) to receive live updates. There are two ways to do this from the client to the node service:
1. A websocket session instantiated by the client.
2. A polling approach.
What's the difference?
Under the hood, a websocket is a full-duplex communication mechanism. That means that you can open a socket from the client (browser) to the node server and they can talk to each other both ways over a long-lived session. The pro is that updates are often instantaneous, without having to incur the cost of making another HTTP request as in the polling case. The con is that it uses a socket connection that may be long-lived, and any server's socket pool has a limited ability to deal with many sockets. There are ways to scale around this issue, but if it's a big concern for you, you may want to go with polling.
Polling is where you set up an endpoint on your server that the client JS code hits every now and then. That endpoint will return you the updated information. The con is that you are now making a new request in order to get updates, which may not be desirable if a lot of updates are expected to come through and the app is expected to be updated in the timeliest manner possible (most of the time polling is sufficient though). The pro is that you do not have a live connection open on the server indefinitely.
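As a sketch of the polling option, assuming a hypothetical /latest endpoint and a 5-second interval:

// Hypothetical client-side polling: ask the server for the latest data every 5 seconds
setInterval(function() {
  fetch('/latest')
    .then(function(res) { return res.text(); })
    .then(function(text) {
      document.getElementById('live-data').textContent = text;
    });
}, 5000);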
Again, there are many more pros and cons, these are just the obvious ones. You decide how to implement it. When the client receives the data from either of these mechanisms, you may update the UI in any suitable manner.
From the server end, you will need a way to persist the information coming from CloudMQTT. There are multiple ways to do this. If you do not care about memory consumption and are ok with potentially throwing away old data if a client does not ask for it for a while, then it may be ok to just store this in memory in a regular javascript object {}. If you do care about persisting the data between server restarts/crashes (probably best), then you can persist to something like Redis, Mongo, any of the SQL stores if your data is relational in nature, or even a regular JSON file on disk (see fs.writeFile).
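Putting the websocket option together with the MQTT code from the question, here is a minimal sketch; the port, the element id, and the 'mqtt' event name are assumptions, and the broker URL and credentials are placeholders as in the question:

// server.js -- relay CloudMQTT messages to browsers over socket.io
var mqtt = require('mqtt');
var http = require('http');

var server = http.createServer().listen(3000);
var io = require('socket.io')(server);

var client = mqtt.connect('xxxxxxxxxxx', { username: 'xxxxx', password: 'xxxxxxx' });

client.on('connect', function() {
  client.subscribe('Motion');
});

// every MQTT message is pushed to all connected browsers
client.on('message', function(topic, message) {
  io.emit('mqtt', { topic: topic, payload: message.toString() });
});

And the matching client code in the .ejs page could look like this (after including /socket.io/socket.io.js):

var socket = io();
socket.on('mqtt', function(data) {
  document.getElementById('motion').textContent = data.payload;
});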
Hope this helped give you a step in the right direction!

Node.js Programming Pattern for getting Execution Context

I am writing a web app in node.js. All processing on the server happens in the context of a session, which is either retrieved or created at the very first stage, when the request hits the server. After this, execution flows through multiple modules and callbacks within them. What I am struggling with is creating a programming pattern so that at any point in the code the session object is available without the programmer having to pass it as an argument in each function call.
If all of the code were in one single file I could have used a closure, but when there are function calls to other modules in other files, how do I arrange things so that the session object is available in the called function without passing it as an argument? I feel there should be some link between the two functions in the two files, but how to arrange that is where I am getting stuck.
More generally, there is always an execution context, which could be a session or a network request, whose processing is spread across multiple files, and the execution context object should be made available at all points. There can be multiple use cases, like having one Log object for each network request or one Log object per session. The plumbing required to make this work should be fitted in sideways, without the application programmer bothering about it; he just knows that the execution context is available at all places.
I think this should be a fairly common problem faced by everyone, so please give me some ideas.
The following illustrates the problem:
MainServer.js
app = require('express').createServer();
app_module1 = require('AppModule1');
var session = get_session();
app.get('/my/page', app_module1.func1);
AppModule1.js
app_module2 = require('AppModule2');

exports.func1 = function(req, res) {
  // I want to know which session context this code is running in
  app_module2.func2(req, res);
};
AppModule2.js
exports.func2 = function(req, res) {
  // I want to know the session context in which this code is running
};
You can achieve this using Domains, a new node 0.8 feature. The idea is to run each request in its own domain, providing a space for per-request data. You can get to the current request's domain without having to pass it all over via process.domain.
Here is an example of getting it setup to work with express:
How to use Node.js 0.8.x domains with express?
Note that domains in general are somewhat experimental and process.domain in particular is undocumented (though apparently not going away in 0.8 and there is some discussion on making it permanent). I suggest following their recommendation and adding an app-specific property to process.domain.data.
https://github.com/joyent/node/issues/3733
https://groups.google.com/d/msg/nodejs-dev/gBpJeQr0fWM/-y7fzzRMYBcJ
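To make that concrete, here is a hedged sketch of the middleware wiring, assuming an Express app and following the process.domain.data suggestion above (domains were experimental at the time):

var domain = require('domain');

// run each request in its own domain
app.use(function(req, res, next) {
  var d = domain.create();
  d.data = { session: req.session }; // app-specific property, per the recommendation above
  d.run(next);
});

// then, anywhere down the call chain (e.g. in AppModule2.func2),
// the context is reachable without being passed as an argument:
// var session = process.domain.data.session;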
Since you are using Express, you can get the session attached to every request. The implementation is as follows:
var express = require('express');
var app = express.createServer();
app.configure('development', function() {
  app.use(express.cookieParser());
  app.use(express.session({ secret: 'foo', key: 'express.sid' }));
});
Then upon every request, you can access session like this:
app.get('/your/path', function(req, res) {
  console.log(req.session);
});
I assume you want to have some kind of unique identifier for every session so that you can trace its context. SessionID can be found in the 'express.sid' cookie that we are setting for each session.
app.get('/your/path', function(req, res) {
  console.log(req.cookies['express.sid']);
});
So basically, you don't have to do anything else but add the cookie parser and enable sessions for your express app, and then when you pass the request to these functions, you can recognize the session ID. You MUST pass the request, though; you cannot build a system where it just knows the session, because you are writing a server and the session is only available upon request.
What express does, and the common practice for building an HTTP stack on node.js, is to use HTTP middleware to "enhance" or add functionality to the request and response objects coming into the callback from your server. It's very simple and straightforward.
module.exports = function(req, res, next) {
  req.session = require('my-session-lib');
  next();
};
req and res are automatically passed into your handler, and from there you'll need to keep them available to the appropriate layers of your architecture. In your example, it's available like so:
AppModule2.js
exports.func2 = function(req, res) {
  // the session context this code is running in:
  req.session; // <== right here
};
Nodetime is a profiling tool that does internally what you're trying to do. It provides a function that instruments your code in such a way that calls resulting from a particular HTTP request are associated with that request. For example, it understands how much time a request spent in Mongo, Redis or MySQL. Take a look at the video on the site to see what I mean http://vimeo.com/39524802.
The library adds probes to various modules. However, I have not been able to see how exactly the context (url) is passed between them. Hopefully someone can figure this out and post an explanation.
EDIT: Sorry, I think this was a red herring. Nodetime is using the stack trace to associate calls with one another. The results it presents are aggregates across potentially many calls to the same URL, so this is not a solution for the OP's problem.

Node.js - Is this a good structure for frequently updated content?

As a follow-up to my question yesterday about Node.js and communicating with clients, I'm trying to understand how the following would work.
Case:
So, I have this website where content is updated very frequently. Let's assume this time that the content is a list of locations with temperatures (yes, a weather service).
Now, every time a client checks for a certain location, he or she goes to a url like this: example.com/location/id where id corresponds to the id of the location in my database.
Implementation:
At the server, checktemps.js loops (every other second or so) through all the locations in my (mySQL) database and checks for the corresponding temperature. It then stores this data in an array within checktemps.js. Because temperatures can change all the time, it's important to keep checking for updates in the database.
When a request to example.com/location/id is made, checktemps.js looks into the array with a record with id = id. Then, it responds with the corresponding temperature.
Question:
Plain text, html or an ajax call is not relevant at the moment. I'm just curious whether I have this right. Node.js is a rather unusual thing to get a grasp on, so I am trying to figure out if this is logical.
At the server, checktemps.js loops (every other second or so) through all the locations in my (mySQL) database and checks for the corresponding temperature. It then stores this data in an array within checktemps.js.
This is extremely inefficient. You should not be polling the database in a loop every other second or so.
Modules
Below is a list of the modules (both node.js modules and other tools) I would use to do this efficiently:
npm is a package manager for node. You can use it to install and publish your node programs. It manages dependencies and does other cool stuff.
I sincerely hope that you already know about npm; if not, I recommend you learn about it as soon as possible. In the beginning you just need to learn how to install packages, which is very easy: you just type npm install <package-name>. Later I would really advise you to learn to write your own packages to manage the dependencies for you.
Express is a High performance, high class web development for Node.js.
This sinatra-style framework from TJ is really sweet, and you should read the available documentation/screencasts to learn its power.
Socket.IO aims to make realtime apps possible in every browser and mobile device, blurring the differences between the different transport mechanisms.
Redis is an open source, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets and sorted sets.
Like Raynos said, this extremely fast/sexy database has pubsub semantics, which are needed to solve your problem efficiently. I think you should really play with this database (tutorial) to appreciate its raw power. Installing it is easy as pie: make install.
Node_redis is a complete Redis client for node.js. It supports all Redis commands, including MULTI, WATCH, and PUBLISH/SUBSCRIBE.
Prototype
I just remembered that I helped another user out in the past with a question about pubsub. I think when you look at that answer you will have a better understanding of how to do it correctly. The code was posted a while back and should be updated (minor changes in express) to:
var PORT = 3000,
    HOST = 'localhost',
    express = require('express'),
    io = require('socket.io'),
    redis = require('redis'),
    app = module.exports = express.createServer(),
    socket = null;

app.use(express.static(__dirname + '/public'));

if (!module.parent) {
  app.listen(PORT, HOST);
  console.log("Express server listening on port %d", app.address().port);

  socket = io.listen(app);

  socket.on('connection', function(client) {
    var subscribe = redis.createClient();
    subscribe.subscribe('pubsub'); // listen to messages from channel pubsub

    subscribe.on("message", function(channel, message) {
      client.send(message);
    });

    client.on('message', function(msg) {
    });

    client.on('disconnect', function() {
      subscribe.quit();
    });
  });
}
I have compressed the updated code with all the dependencies inside, but you will still need to start redis first.
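For completeness, the publishing half is a one-liner with node_redis. A hedged sketch; the message shape and function name are made up, while the 'pubsub' channel name comes from the prototype above:

var redis = require('redis');
var publisher = redis.createClient();

// call this whenever a temperature changes; every subscribed
// socket.io client above will receive the message
function publishTemperature(locationId, temp) {
  publisher.publish('pubsub', JSON.stringify({ id: locationId, temp: temp }));
}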
Questions
I hope this gives you an idea of how to do this.
With node.js you could do it even better. The request/response model has been ingrained in our heads since the beginning of the web, but you can do just one ajax request when you open the website/app and never end that call. Node.js can then send data to the client whenever you have updates. Search on YouTube for "Introduction to Node.js with Ryan Dahl" (the creator of node.js), where he explains it. Then you have realtime updates without the client having to make requests all the time.
