I'm unsure how to apply the MVC architecture to my Node web app, specifically how to separate the model from the controller.
The current structure has the views separated properly (all my .ejs files), and I guess my app.js is like the controller, since it contains all the routes that respond to HTTP requests. The routes are then handled in a queries.js file, which is where I query my database using node-postgres (most of the rendered views either display information from the database or display forms so the user can insert data into the database). Should I be doing this differently? More specifically, should I create a model that contains raw data from the database and have the route handlers manipulate that data instead of querying the database directly (I'm not sure how I would handle inserting into the database then...)? I'm just concerned that the way I have currently structured my web app will make it difficult to manage as it grows, and difficult for others to understand and add on to.
Here is an example of how the web app is currently structured: Say a user clicks a button to display all the active orders, my app.js file would look something like this
const db = require('./queries')
app.get('/activeorders', db.getActiveOrders)
My queries.js file would then handle this route like so:
const Pool = require('pg').Pool

const pool = new Pool({
  user: process.env.USER,
  host: process.env.HOST,
  database: process.env.DB,
  password: process.env.PASSWORD,
  port: process.env.PORT,
})
const getActiveOrders = (request, response) => {
  const queryOrders = 'SELECT * FROM orders WHERE complete=0 ORDER BY date_rec DESC;';
  pool.query(queryOrders, (error, results) => {
    if (error) {
      throw error
    }
    var ordersObj = results.rows;
    response.render('pages/activeorders', {
      ordersObj: ordersObj
    })
  })
}

module.exports = {
  getActiveOrders,
}
As can be seen, the handler queries the database and stores the results in an object, which is then passed to activeorders.ejs when it is rendered.
Consider using middleware that exposes your PostgreSQL database as an API.
Any change made to the database will be propagated and immediately available through your API. It also gives you a lot of modularity for filters, selects, and so on.
For complex queries, a DBA can develop stored procedures that become available to everyone through the API. That is your model layer: only SQL, exposed through middleware.
The controller is then how you make your API calls, plus any data transformations needed to refresh the data shown in your view.
Abstracting the model layer with middleware definitely saves a lot of time.
For Postgres, the major options are PostgREST and pgrest.
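To make the idea concrete, here is a hypothetical sketch of what the controller side can look like once middleware such as PostgREST is the model layer: the application contains no SQL, only URLs (the base URL and column names are assumptions carried over from the question; the filter syntax is PostgREST's).

```javascript
// With PostgREST, query-string filters map directly onto SQL:
//   complete=eq.0       -> WHERE complete = 0
//   order=date_rec.desc -> ORDER BY date_rec DESC
// So the "active orders" query from the question becomes a URL the
// controller can fetch, with no SQL in the app itself.
function activeOrdersUrl(base) {
  return base + '/orders?complete=eq.0&order=date_rec.desc';
}
```

The controller then fetches that URL and passes the resulting JSON to the view, exactly as it previously passed `results.rows`.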
In the MVC pattern we split the application into parts, connecting the implementation to an evolving model; this approach suits software development for complex needs.
I suggest you look at sample-mvc-express-postgres to understand how the code can be structured.
If you are interested in design patterns, visit https://dev.to/salah856/implementing-domain-driven-design-part-i-5a72. This link explains how to use DDD in your application and shows how to design each layer of it.
Set app.js as the root file and create a controller folder and a model folder. Define all models in the model folder. In the controller, first import the model handlers, then create all the API/query functions. Finally, in the root app.js file, import the controller functions and wire them up.
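A minimal sketch of that split, using the active-orders example from the question (the factory-function shape and file layout are illustrative assumptions, not the only way to do it):

```javascript
// models/orders.js -- the model owns all SQL and knows nothing about HTTP.
// `pool` would be the pg Pool from the question, passed in from app.js.
function makeOrdersModel(pool) {
  return {
    getActive() {
      return pool
        .query('SELECT * FROM orders WHERE complete=0 ORDER BY date_rec DESC;')
        .then((results) => results.rows);
    },
  };
}

// controllers/orders.js -- the controller maps HTTP onto model calls and views.
function makeOrdersController(ordersModel) {
  return {
    getActiveOrders(request, response, next) {
      ordersModel.getActive()
        .then((ordersObj) => response.render('pages/activeorders', { ordersObj }))
        .catch(next); // hand errors to Express middleware instead of throwing
    },
  };
}

module.exports = { makeOrdersModel, makeOrdersController };
```

app.js then only wires things together: `app.get('/activeorders', controller.getActiveOrders)`. Inserts work the same way: the model gains an `insert(...)` method that runs the `INSERT` query, and a controller method calls it from a POST route.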
I have a client who has 130k books (~4 terabytes), and he wants a site he can upload them to in order to make an online library. So, how can I make it possible for him to upload them automatically, or at least upload multiple books at a time? I'll be using Node.js + MySQL.
I might suggest using Object Storage on top of MySQL to speed up book indexing and retrieval- but that is entirely up to you.
HTTP is a streaming protocol, and node, interestingly, has streams.
This means that, when you send a HTTP request to a node server, internally, node handles it as a stream. This means, theoretically, you can upload massive books to your web server while only having a fraction of it in memory at a time.
The first thing to note is that books can be very large. To process them efficiently, we must handle the metadata (name, author, etc.) and the content separately.
One example, using an Express-like framework, could be (pseudo-code):
app.post('/begin/:bookid', (Req, Res) => {
  // Parse the small JSON body and store the metadata as a row.
  ParseJSON(Req)
  MySQL.insertRow(Req.params.bookid, Req.body.name, Req.body.author)
})

app.put('/upload/:bookid', (Req, Res) => {
  // If we use MySQL to store the book content:
  MySQL.insertBlob(Req.params.bookid, Req.body)
  // Or, if we use object storage, stream the request straight through:
  let Uploader = new StorageUploader(Req.params.bookid)
  Req.pipe(Uploader)
})
If you need inspiration, look at how WeTransfer has created their API. They deal with lots of data daily- their solution might be helpful to you.
Remember- your client likely won't want to use Postman to upload their books. Build a simple website for them in Svelte or React.
I have built a Todo App with create-react-app. The store I'm using is based on Local Storage (a JS attribute of the window object). Now I have created a MySQL database and want to connect to it, so the state will show the values from the database and will be updated through actions.
I've tried to connect to db and output values through 'node' console using db.js. It works.
const mysql = require('mysql');

const con = mysql.createConnection({
  host: "localhost",
  user: "root",
  password: "root",
  database: 'root'
});

con.connect(function(err) {
  if (err) throw err;
  con.query("SELECT * FROM tasks", function (err, result, fields) {
    if (err) throw err;
    console.log(result);
  });
});
Is it possible to connect the state of app to database using this script?
You can't connect them directly.
JavaScript running in a web browser cannot speak the MySQL protocol (nor can it make raw network connections that would be needed to write an implementation in JS).
Instead, create a web service in the programming language of your choice (which could be JavaScript running on Node.js: the code you already have, plus Express.js and some glue) and use Ajax to communicate with it.
The general solution for a question like this is the following framework:
Back-end (Node.js, Express, Database connection including authorization)
Front-end (React, with Redux to manage state)
If you then launch the React app, it should populate its state based on data retrieved from the database, which is a process to which you can add authorization (make retrievable data depend on the role/status of the user).
In the back-end you can define functions that take in a certain subset of parameters, which performs database actions, to which you can add business rules for your application. The React app then just sends HTTP requests to the Express server, which handles everything that needs verification and authorization before even touching the data.
If you search the internet for any configuration of a fullstack architecture using React and MySQL, you'll find similar results to what I mentioned.
I have been doing some Angular 2 for the first time and, after finishing the HTML and CSS, I am now stuck at the DB connection.
Here is what I am working with:
app.component.ts (Loads the templateURL)
overview.component.html (Is the template, 90% html 10% Modal)
db_basic.db (The DB File)
sql.js (This can get values from the DB if you run node sql.js)
Here the sql.js file to show how I can connect to the DB.
var sqlite3 = require('sqlite3').verbose();
var db = new sqlite3.Database('../db/db_basic.db');
var check;

db.serialize(function() {
  db.each("SELECT * FROM server", function(err, row) {
    console.log(row.name);
  });
});

db.close();
Now, my Question would be, how do I connect to the DB and use those values in the HTML?
You need to make a REST API.
The REST API separates the frontend (Angular) from the backend (database);
it serves you data, and it can take care of security.
You can use a framework such as Express.js to build a REST API in Node.
Express.js can also be used to serve your static files (the Angular project),
so you do not need Apache or nginx.
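As a small sketch of that API layer, the query from sql.js can be wrapped in a function that hands back all rows at once (sqlite3's `db.all` does exactly this), which a route can then serialize as JSON for the Angular app to fetch over HTTP. The route path shown in the comment is an assumption.

```javascript
// Collect every row of the `server` table and pass them to a callback,
// instead of console.log-ing them one by one as sql.js does.
// `db` is assumed to be the sqlite3 Database from the question.
function getServers(db, done) {
  db.all('SELECT * FROM server', (err, rows) => done(err, rows));
}

// An Express route using it might look like (assuming `app` and `db` exist):
//   app.get('/api/servers', (req, res) => {
//     getServers(db, (err, rows) => err ? res.sendStatus(500) : res.json(rows));
//   });
```

The Angular component then calls that endpoint with HttpClient and binds the returned array in its template, rather than touching the DB file directly.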
Let's say I want to create a ToDo list using angular. I have a REST API that stores the items in db and provides basic operations. Now when I want to connect my angular app to the REST api I found two ways to do so following some tutorials online:
1. Data gets handled in the backend only: a service is created that has a getAllTodos function, which gets attached directly to the scope (e.g. to use it in ng-repeat):
var getAllTodos = function() {
  //Todo: Cache http request
  return $http...;
};

var addTodo = function(todo) {
  //Todo: Clear cache of getAllTodos
  $http...
};
2. Data gets handled in the frontend too: there is an init function that initializes the todos variable in the service.
var todos = [];

var init = function() {
  $http...
  todos = //result of $http;
};

var getAllTodos = function() {
  return todos;
};

var addTodo = function(todo) {
  $http...
  todos.push(todo);
};
I've seen both ways in several tutorials but I'm wondering what would be the best way? The first one is used in many tutorials where the author from the start has in mind to attach it to a REST API. The second one is often used when the author at first wants to create the functionality in the frontend and later wants to store data permanently using a backend.
Both ways have their advantages and disadvantages. The first reduces code duplication between frontend and backend; the second allows faster operations, because they can be handled in the frontend first and the backend can be informed about the changes afterwards.
//EDIT: Frontend is Angular.JS Client for me, backend the REST API on the server.
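The second option can be sketched in framework-agnostic form like this (the `api` object stands in for the `$http` calls and is an assumption for illustration; in Angular it would be `$http.get(...).then(...)`):

```javascript
// Option 2: the service keeps a local copy of the todos and updates it
// after telling the backend, so reads stay synchronous and fast.
function makeTodoService(api) {
  let todos = [];
  return {
    init() {
      // One initial round-trip; afterwards reads come from memory.
      return api.get('/todos').then((result) => { todos = result; });
    },
    getAllTodos() {
      return todos; // synchronous, safe to bind in ng-repeat
    },
    addTodo(todo) {
      // Persist first, then update the local copy on success.
      return api.post('/todos', todo).then(() => { todos.push(todo); });
    },
  };
}
```

Note the trade-off visible in the code: the local array can drift from the server if another client writes to the same backend, which is why option 1 (always re-fetch) is simpler to keep correct.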
Separation of Frontend and Backend is often done for security reasons. You can locate Backend on a separate machine and then restrict access to that machine to only calls originating from the Frontend. The theory is that if the Frontend is compromised, the Backend has a lower risk factor. In reality if someone has compromised any machine on your network then the entire network is at risk on one level or another.
Another reason for a Backend/Frontend separation would be to provide database access through the Backend to multiple frontend clients. You have a single Backend with access to the DB and either multiple copies of the Frontend or different Frontends that access the Backend.
Your final design needs to take into account the possible security risks and also deployment and versioning. With the multiple-tier approach you can deploy individual Frontends without having to drop the Backend, and you can also "relocate" parts of the application without downtime. The more flexible the design of your application, the deployment may be more complicated. The needs of your application will depend on if you are writing a simple Blog or a large Enterprise application.
You need both frontend and backend functionality. In the frontend you prepare the data being sent, and in the backend you handle the requests made to the server.
When you have a RESTful server which only responds with JSON by fetching some information from the database, and then you have a client-side application, such as Backbone, Ember or Angular, from which side do you test an application?
Do I need two tests - one set for back-end testing and another set for front-end testing?
The reason I ask is testing REST API by itself is kind of difficult. Consider this code example (using Mocha, Supertest, Express):
var request = require('supertest');
var should = require('chai').should();
var app = require('../app');

describe('GET /api/v1/people/:id', function() {
  it('should respond with a single person instance', function(done) {
    request(app)
      .get('/api/v1/people/:id')
      .expect(200)
      .end(function(err, res) {
        var json = res.body;
        json.should.have.property('name');
        done();
      });
  });
});
Notice that :id in the URL? That's an ObjectId of a specific person. How do I know what to pass there? I haven't even looked into the database at this point. Does that mean I need to import the Person model, connect to the database, and run queries from within the tests? Maybe I should just move my entire app.js into the tests? (sarcasm :P). That's a lot of coupling. The dependency on mongoose alone means I need MongoDB running locally in order to run this test. I looked into sinon.js, but I am not sure if it's applicable here; there weren't many examples of how to stub mongoose.
I am just curious how do people test these kinds of applications?
Have you tried using mongoose-model-stub in your server-side test? It will free you from having to remember or hardcode database info for your tests.
As for testing the client side, your "webapp" is basically two apps: a server API and a client-side frontend. You want tests for both ideally. You already know how to test your server. On the client you would test your methods using stubbed out "responses" (basically fake json strings that look like what your web service spits out) from your API. These don't have to be live urls; rather it's probably best if they're just static files that you can edit as needed.
I would use nock: https://github.com/pgte/nock
What you want to test is the code you have written for your route.
So what you do is create a response that will be sent when the endpoint is hit.
Basically, it's a fake server.
Something like this. First, your actual method:
request({
  method: "GET",
  url: "http://sampleserver.com/account"
}, function(err, res, data) {
  if (err) {
    done(err);
  } else {
    return done(null, data);
  }
});
Then..
var nockObj = nock("http://sampleserver.com")
  .get("/account")
  .reply(200, mockData.arrayOfObjects);

//your assertions here..
This way you don't alter the functionality of your code. It's like saying: instead of hitting the live server, hit this fake server and get mock data. All you have to do is make sure your mock data stays in sync with the data the real server would return.