Handling data existence validation in ExpressJS service layer or controller - javascript

I'm working on a medium-sized website, and I figured that I need to write more maintainable code with a better project structure.
I found this article, and a few others describing basically the same idea, about 3-layer architecture.
I find it great: I wasn't using a service layer before, and it helped to DRY the code.
But the article doesn't include anything about validation and how it should be handled, especially when validation is done against the database (like checking resource existence).
Should I do the validation in the service layer, or write validation middleware (which would need to access the database, and I think that goes against the pattern described)?
For example, I ended up with two API endpoints where website users update and remove their already added DeliveryAddress. As you can see below, doing validation in the service layer left the controllers with duplicated code for handling the HTTP response.
my routes files
router.put('/delivery-addresses/:id', DeliveryAddressesController.update);
router.delete('/delivery-addresses/:id', DeliveryAddressesController.remove);
DeliveryAddressesController
async update(req, res){
  try {
    // ...
    await AddressesService.updateDeliveryAddress(userId, address);
    // ...
  } catch(error){
    if (error instanceof ValidationError){
      if (error.name === 'NO_SUCH_ADDRESS'){
        return res.status(404).json(error);
      }
    }
    // ...
  }
},
async remove(req, res){
  try {
    // ...
    await AddressesService.removeDeliveryAddress(userId, addressId);
    // ...
  } catch(error){
    if (error instanceof ValidationError){
      if (error.name === 'NO_SUCH_ADDRESS'){
        return res.status(404).json(error);
      }
    }
    // ...
  }
},
Options
I could think of these options, but I'm not sure whether they're good, or which one to pick.
1. Validation middleware before the controller, where the middleware itself calls the validation method in the service. I think it's a good option, but I may end up with multiple database calls fetching the same resource if I don't store the result in the req object.
2. A shared function in the catch block that checks for any ValidationError and sends the response, but that's not a very readable way.

The first question you need to ask is what kind of validation you are writing. The middleware layer sits between the transport and the application, and it matters what kind of validation you want to do. For example, if two services communicate over a network and you establish a key between them, and you need a validator to check that key in each service, then it is better to send the key in a header and validate it in middleware. But sometimes you want to evaluate information from other services, or even from the database, which requires several validation steps; it is better to implement all of those in your service layer! I suggest option 2 for this 😉
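Building on that suggestion (option 2, with validation staying in the service layer): Express has a built-in convention for centralizing the duplicated catch logic, namely error-handling middleware, i.e. a function with four arguments registered after the routes. Below is a minimal sketch, reusing the ValidationError class and NO_SUCH_ADDRESS name from the question; the module path, status mapping and response shapes are assumptions for illustration, not the article's prescription:

const { ValidationError } = require('./errors'); // hypothetical module

// Express treats a 4-argument function as error-handling middleware.
function errorHandler(error, req, res, next) {
  if (error instanceof ValidationError) {
    // Map domain validation errors to HTTP statuses in one place.
    if (error.name === 'NO_SUCH_ADDRESS') {
      return res.status(404).json(error);
    }
    return res.status(400).json(error);
  }
  return res.status(500).json({ message: 'Internal server error' });
}

// Controllers then just forward errors instead of mapping them:
async function update(req, res, next) {
  try {
    // userId and address would come from req, as elided in the question
    await AddressesService.updateDeliveryAddress(userId, address);
    res.sendStatus(204);
  } catch (error) {
    next(error); // the errorHandler above decides the HTTP response
  }
}

// Registered once, after all routes:
// app.use(errorHandler);

With this in place, both update and remove shrink to a try/next(error) pair, and the 404 mapping lives in exactly one spot.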

Related

Ajax GET: multiple data-specific calls, or fewer less specific calls?

I'm developing a web app using a Node.js/express backend and MongoDB as a database.
The below example is for an admin dashboard page where I will display cards with different information relating to the users on the site. I might want to show - on the sample page - for example:
The number of each type of user
The most common location for each user type
How many signups there are by month
Most popular job titles
I could do this all in one route, where I have a controller that performs all of these tasks, and bundles them as an object to a url that I can then pull data from using ajax. Or, I could split each task into its own route/controller, with a separate ajax call to each. What I'm trying to decide is what are the best practices around making multiple ajax calls on a single page.
Example:
I am building up a page where I will make an interactive table using DataTables for different types of user (I currently have two: mentors and mentees). This example requires just two data requests (one for each user type), but my final page will be more like 10.
I am making an ajax GET call for each user type, and building the table from the returned data:
User type 1 - Mentees
$.get('/admin/' + id + '/mentees')
  .done(data => {
    $('#menteeTable').DataTable({
      data: data,
      "columns": [
        { "data": "username" },
        { "data": "status" }
      ]
    });
  });
User type 2 - Mentors
$.get('/admin/' + id + '/mentors')
  .done(data => {
    $('#mentorTable').DataTable({
      data: data,
      "columns": [
        { "data": "username" },
        { "data": "position" }
      ]
    });
  });
This then requires two routes in my Node.js backend:
router.get("/admin/:id/mentors", getMentors);
router.get("/admin/:id/mentees", getMentees);
And two controllers that are structured identically (but filter for different user types):
getMentees(req, res, next){
  console.log("Controller: getMentees");
  let query = { accountType: 'mentee', isAdmin: false };
  Profile.find(query)
    .lean()
    .then(users => {
      return res.json(users);
    })
    .catch(err => {
      console.log(err);
    });
}
This works great. However, as I need to make multiple data requests I want to make sure that I'm building this the right way. I can see several options:
Make individual ajax calls for each data type, and do any heavy lifting on the backend (e.g. tally user types and return) - as above
Make individual ajax calls for each data type, but do the heavy lifting on the frontend. In the above example I could have just as easily filtered out isAdmin users on the data returned from my ajax call
Make fewer ajax calls that request less refined data. In the above example I could have made one call (requiring only one route/controller) for all users, and then filtered data on the frontend to build two tables
I would love some advice on which strategy is most efficient in terms of time spent sourcing data
UPDATE
To clarify the question, I could have achieved the same result as above using a controller setup something like this:
Profile.find(query)
  .lean()
  .then(users => {
    let mentors = [],
        mentees = [];
    users.forEach(user => {
      if (user.accountType === 'mentee') {
        mentees.push(user);
      } else if (user.accountType === 'mentor') {
        mentors.push(user);
      }
    });
    return res.json({ mentees, mentors });
  })
And then make one ajax call, and split the data accordingly. My question is: which is the preferred option?
TL;DR: Option 1
IMO I wouldn't serve unprocessed data to the front-end. Things can go wrong, you can reveal too much, and it could take a lot for the unspecified client machine to process (it could be a low-power device with limited bandwidth and battery, for example). You want a smooth user experience, and JavaScript on the client churning information out of a mass of data would detract from that. I use the back-end for processing (prepare the information exactly as you need it), JS for retrieving and placing the information on the page (AJAX) and for things like switching element states, and CSS for anything moving around (animations, transitions, etc.) as much as possible before resorting to JS.
Also, for the routes, my approach would be that each distinct package of information (each DataTable) has its own route, so you're not overloading one method with too many purposes; keep it simple and maintainable. You can always abstract away anything that's identical and repeated often, as in the sketch below.
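For instance, the two identical controllers from the question could come out of one factory; a hedged sketch (makeGetUsersByType is a made-up name; Profile is the model from the question):

// Both controllers differ only in the query, so generate them.
function makeGetUsersByType(accountType) {
  return function(req, res, next) {
    Profile.find({ accountType: accountType, isAdmin: false })
      .lean()
      .then(users => res.json(users))
      .catch(next); // hand errors to an error-handling middleware
  };
}

router.get("/admin/:id/mentors", makeGetUsersByType('mentor'));
router.get("/admin/:id/mentees", makeGetUsersByType('mentee'));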
So to answer your question, I'd go with Option 1.
You could also offer a single 'page-load' endpoint, then if anything changes update the individual tables later using their distinct endpoints. This initial 'page-load' call could collate the information from the endpoints on the backend and serve as one package of data to populate all tables initially. One initial request with one lot of well-defined data, then the ability to update an individual table if the user requests it (or there is a push if you get into that).
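A minimal sketch of such a 'page-load' endpoint, assuming the Profile model from the question (the /admin/:id/page-load route name is made up):

// One initial request collates everything the page needs; the
// per-table endpoints remain available for later refreshes.
router.get("/admin/:id/page-load", function(req, res, next) {
  Promise.all([
    Profile.find({ accountType: 'mentee', isAdmin: false }).lean(),
    Profile.find({ accountType: 'mentor', isAdmin: false }).lean()
  ])
    .then(([mentees, mentors]) => res.json({ mentees, mentors }))
    .catch(next);
});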
It is a really good question. First of all, you should consider how your application will work with the received data. If it is a large amount of data that doesn't change on the frontend but is needed in full by several different views (like user settings data: the application reads it constantly but it rarely changes), it can be cached on the frontend, and you could go with your second option. If, on the other hand, the frontend works with only a small part of a huge amount of database data (like log data for a specific user), it is preferable to preprocess (filter) on the server side, as in your first and third options. To me, the second option is really only preferable for caching unchanged data on the frontend.
After your clarification of the question: you could group the results with the lodash library:
const _ = require('lodash');

Profile.find(query)
  .lean()
  .then(users => {
    const result = _(users)
      .groupBy(elem => elem.accountType)
      .map((vals, key) => ({ accountType: key, users: vals }))
      .value();
    return res.json(result); // moved inside .then, where result exists
  });
Certainly you could map your data however you feel comfortable. This way you get all types of accounts (not only 'mentee' and 'mentor').
Usually there are 3 things in such architectures:
1. Client
2. API Gateway
3. Micro services (Servers)
In your case :
1. Client is JS application code
2. API Gateway + Server is Nodejs/express (Dual responsibility)
Point 1 to be noted
Servers only provide core APIs. So the API for the server should be only a user API, like:
/users?type={mentor/mentee/*}&limit=10&pageNo=8
i.e. anyone can ask for all the data, or for filtered data, using the type query string.
Point 2 to be noted
Web pages are composed of multiple data points, and making a call to the same server for every data point increases round trips and makes the UX worse; that is why API gateways exist. So in this case the JS would not communicate directly with the core server; it communicates with the API gateway through APIs like:
/home
The above API internally calls the API below and aggregates the data into a single JSON with the mentor and mentee lists:
/users?type={mentor/mentee/*}&limit=10&pageNo=8
That API simply passes the call through to the core server with the query attributes.
Now, since in your code the API gateway and the core server are merged into a single layer, this is how you could set up your code:
async getHome(req, res, next){
  console.log("Controller: /home");
  let queryMentees = { accountType: 'mentee', isAdmin: false };
  let queryMentors = { accountType: 'mentor', isAdmin: false };
  // run both queries in parallel, then aggregate into one JSON
  const [mentees, mentors] = await Promise.all([
    getProfileData(queryMentees),
    getProfileData(queryMentors)
  ]);
  return res.json({ mentees, mentors });
}

async getUsers(req, res, next){
  console.log("Controller: /users");
  let query = { accountType: req.query.type, isAdmin: req.query.isAdmin === 'true' };
  return res.json(await getProfileData(query));
}
And a common ProfileService.js class with a function like:
getProfileData(query){
  // return the promise so callers can await the result
  return Profile.find(query)
    .lean()
    .catch(err => {
      console.log(err);
    });
}
More info about API Gateway Pattern here
If you can't estimate how many user types your app will need, then use parameters.
If I were writing this application, I wouldn't write multiple functions for calling ajax, and I wouldn't write multiple routes and controllers.
Client side, like this:
let getQuery = (id, userType) => {
  $.get('/admin/' + id + '/userType/' + userType)
    .done(data => {
      let dataTable = null;
      switch (userType) {
        case "mentee":
          dataTable = $('#menteeTable');
          break;
        case "mentor":
          dataTable = $('#mentorTable');
          break;
        // You can add more selectors for more DataTables here, but I
        // wouldn't prefer this way: you can generate the "columns"
        // property on the server, like "data", so you can use a single
        // DataTable on the client side.
      }
      dataTable.DataTable({
        data: data,
        "columns": [
          { "data": "username" },
          { "data": "status" }
        ]
      });
    });
};
My preference for the client side:
let getQuery = (id, userType) => {
  $.get('/admin/' + id + '/userType/' + userType)
    .done(data => {
      $('#dataTable').DataTable({
        data: data.rows,
        "columns": data.columns
      });
    });
};
In this scenario the server response should have a shape like {rows: [{}...], columns: [{}...]}; see the DataTables examples. A sketch of generating that shape follows the server code below.
Server side, like this.
Just one route:
router.get("/admin/:id/userType/:userType", getQueryFromDB);
Controller
getQueryFromDB(req, res, next){
  let query = { accountType: req.params.userType, isAdmin: false };
  Profile.find(query)
    .lean()
    .then(users => {
      return res.json(users);
    })
    .catch(err => {
      console.log(err);
    });
}
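To actually serve the {rows, columns} shape the preferred client expects, the controller would need to attach column definitions; a hedged sketch (columnsByType and its field lists are made up to match the question's two tables):

// Column definitions per user type, so one client-side DataTable
// can render any type the server knows about.
const columnsByType = {
  mentee: [{ data: "username" }, { data: "status" }],
  mentor: [{ data: "username" }, { data: "position" }]
};

getQueryFromDB(req, res, next){
  const userType = req.params.userType;
  Profile.find({ accountType: userType, isAdmin: false })
    .lean()
    .then(users => res.json({ rows: users, columns: columnsByType[userType] }))
    .catch(err => console.log(err));
}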
So the main point of your question, for me, is that mentees, mentors, etc. are parameters, just like "id".
Make sure your authentication checks which users have access to which userType data, for both code samples (mine and yours); otherwise someone can reach your data just by changing the route.
Have a nice weekend
From the perspective of performance and smoothness of the UI on the user's device:
Sure, it would be better to do one ajax request for all the core data (whatever is important to show as soon as possible), and possibly perform more requests for lower-priority data after some tiny delay. Or do two requests: one for 'fast' data and another for 'slow' data (if that split is applicable), because:
On one hand, many ajax requests can slow down the UI. There is a limit on how many ajax requests can run at the same time (it is browser-dependent and can be from 2 to 10), so if, for example, IE has a limit of 2, then with 10 ajax calls there will be a queue of waiting requests.
On the other hand, if there is a lot of data to show, or some data takes longer to prepare, it could mean a long wait for the backend response before anything is shown.
Talking of heavy lifting: it is not good to do such things on the UI side anyway, because:
The user's device may be short on resources and 'slow'.
JavaScript is single-threaded and, as a consequence, any long loop 'freezes' the UI for the time it takes that loop to run.
Talking of filtering users:
Profile.find(query)
  .lean()
  .then(users => {
    let mentors = [],
        mentees = [];
    users.forEach(user => {
      if (user.accountType === 'mentee') {
        mentees.push(user);
      } else if (user.accountType === 'mentor') {
        mentors.push(user);
      }
    });
    return res.json({ mentees, mentors });
  })
seems to have one problem: the query will possibly have sorting and limits, and if so, the final result will be inconsistent; it could end up with only mentees or only mentors. I think you should do two separate queries against the data storage anyway; see the sketch below.
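A hedged sketch of that two-query variant, so each user type keeps its own consistent sort and limit (the sort field and limit value are invented for illustration):

// Separate queries: neither type can crowd the other out of a
// shared sort/limit window.
Promise.all([
  Profile.find({ accountType: 'mentee', isAdmin: false })
    .sort({ username: 1 }).limit(50).lean(),
  Profile.find({ accountType: 'mentor', isAdmin: false })
    .sort({ username: 1 }).limit(50).lean()
])
  .then(([mentees, mentors]) => res.json({ mentees, mentors }))
  .catch(err => console.log(err));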
From the perspective of project structure, maintainability, flexibility, reusability, and so on: of course it is good to decouple things as much as possible.
So, finally, imagine you made:
1. Many microservices, e.g. one backend microservice per widget, plus a layer that aggregates their results so the UI's traffic is optimized into 1-2 ajax queries.
2. Many UI modules, each working with its own data, received from a service that makes 1-2 calls to the aggregating backend and distributes the different datasets it receives to the various frontend modules.
At the back end, just make one dynamic, parameterized API. You can pass mentor, mentee, admin, etc. as the role. You should have some kind of user authentication and authorization to check whether user A can see users in role B or not.
Regarding the UI, it's up to the users whether they want one page with a drop-down filter, or URLs they can bookmark:
either multiple URLs, like /admin, /mentor, etc.,
or one URL with a query string and a drop-down: /user?role=mentor, /user?role=admin.
Based on the URL you have to make the controllers. I generally prefer a drop-down that fetches data (with, say, all mentors as the default selection).

How to deny access for fetch operations

I'm trying to create an access control system at the document level in Meteor, and I seem to be missing how to prevent users from fetching documents.
I've read the documentation around collection.allow and collection.deny. Via these objects we can control "who" can update, remove and insert. The problem is that Meteor doesn't seem to supply similar functionality for fetch operations. What is the proper way to deny unauthorized users from reading documents?
Extra requirement: this needs to happen server side so we don't leak documents to unauthorized users over the network.
There is no way to deny reads to collection data once it has arrived on the client. In theory, there's no way to actually enforce anything on the client because the user could modify the code. You can, however, enforce which documents get published.
A publish function can handle authorization rules of arbitrary complexity. Here's a simple example where we want to publish documents from the Messages collection only to users who are members of the given group:
Meteor.publish('messagesForGroup', function(groupId) {
  check(groupId, String);
  var group = Groups.findOne(groupId);

  // make sure we have a valid group
  if (!group)
    throw new Meteor.Error(404, 'Group not found');

  // make sure the user is a member
  if (!_.contains(group.members, this.userId))
    throw new Meteor.Error(403, 'You are not a member of the group');

  return Messages.find({groupId: groupId});
});
1) Remove the autopublish package from your app.
2) Create your own publication and deny access to unauthorized users:
Meteor.publish("userData", function () {
if (this.userId) { // Check user authorized
return MyCollection.find(); // Share data
} else {
this.ready(); // Share nothing
}
});
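On the client, the matching subscription then receives only whatever the publish function returned for this user; a minimal sketch (the onReady logging is just illustrative):

// Client side: until the server-side publish shares a cursor for
// this user, MyCollection on the client simply stays empty.
Meteor.subscribe("userData", function () {
  // onReady: the published documents have arrived
  console.log(MyCollection.find().count());
});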

Parse iOS SDK: Understanding Cloud Code

Scenario = I am slowly but surely wrapping my head around what is going on with Parse's cloud code features. I just need some help from those who would like to answer some short, relatively simple questions about what is going on in some sample cloud code functions.
The code I will use in this example is below
1) cloud code
Parse.Cloud.define('editUser', function(request, response) {
  var userId = request.params.userId,
      newColText = request.params.newColText;

  var User = Parse.Object.extend('_User'),
      user = new User({ objectId: userId });

  user.set('new_col', newColText);

  Parse.Cloud.useMasterKey();
  user.save().then(function(user) {
    response.success(user);
  }, function(error) {
    response.error(error);
  });
});
2) called from iOS
[PFCloud callFunction:@"editUser" withParameters:@{
  @"userId": @"someuseridhere",
  @"newColText": @"new text!"
}];
This code was taken from here
Question 1 =
(request, response)
I am confused by what this is. Is this like typecasting in iOS where I am saying (in the iOS call) I want to pass an NSString into this function ("userId") and inside the cloud code function I'm going to call it "request"? Is that what's going on here?
Question 2 =
Parse.Object.extend('_User')
Is this grabbing the "User" class from the Parse database so that a "PFObject" of sorts can update it by creating a new "user" in the line below it?
Is this like a...
PFObject *userObject = [PFObject objectWithClassName:@"User"]?
Question 3 =
user.set('new_col', newColText)
This obviously 'sets' the values to be saved to the PFUser (~I think). I know that the "newColText" variable is the text that is to be set - but what is 'new_col'? Only thing I can think of is that this sets the name of a new column in the database of whatever type is being passed through the "request"?
Is this like a...
[[PFUser currentUser] setObject: forKey:]
Question 4 =
Parse.Cloud.useMasterKey()
Without getting too technical, is this basically all I have to type before I can edit a "User" object from another User?
Question 5 =
user.save().then(function(user) {
  response.success(user);
}
Is this like a...
[user saveInBackgroundWithBlock:]?
and if so, is
function(error) {
  response.error(error)
just setting what happens if there is an error in the saveInBackgroundWithBlock?
Please keep in mind, I know iOS - not JavaScript. So try to be as descriptive as possible to someone who understands the Apple realm.
Here's my take on your questions:
The request parameter is for you to access everything that is part of the request/call to your cloud function, it includes the parameters passed (request.params), the User that is authenticated on the client (request.user) and some other things you can learn about in the documentation. The response is for you to send information back to the calling code, you generally call response.success() or response.error() with an optional string/object/etc that gets included in the response, again documentation here.
That's a way of creating an instance of a User, which because it is a special internal class is named _User instead, same with _Role and _Installation. It is creating an instance of the user with an ID, not creating a new one (which wouldn't have an ID until saved). When you create an object this way you can "patch" it by just changing the properties you want updated.
Again, look at the documentation or an example, the first parameter is the column name (it will be created if it doesn't exist), the second value is what you want that column set to.
You have to do Parse.Cloud.useMasterKey() when you need to do something that the user logged into the client doesn't have permission to do. It means "ignore all security, I know what I'm doing".
You're seeing a promise chain, each step in the chain allows you to pass in a "success" handler and an optional "error" handler. There is some great documentation. It is super handy when you want to do a couple of things in order, e.g.
Sample code:
var post = new Parse.Object('Post');
var comment = new Parse.Object('Comment');
// assume we set a bunch of properties on the post and comment here
post.save().then(function() {
// we know the post is saved, so now we can reference it from our comment
comment.set('post', post);
// return the comment save promise, so we can keep chaining
return comment.save();
}).then(function() {
// success!
response.success();
}, function(error) {
// uh oh!
// this catches errors anywhere in the chain
response.error(error);
});
I'm pretty much at the same place as you are, but here are my thoughts:
No, these are the parameters received by the function. When something calls the editUser cloud function, you'll have those two objects to use: request & response. The request is basically what the iOS device sent to the server, and response is what the server will send to the iOS device.
Not quite that. It's like creating a subclass of _User.
Think of Parse object types as database tables and their instances as rows. The set will set (derp) the attribute/column 'new_col' to the value of newColText.
Not sure, never used that function as I don't handle User objects. But might be that.
Pretty much that. But it's more sort of like (pseudo-code, mixing JS with Obj-C):
[user saveInBackgroundWithBlock:^(BOOL succeeded, NSError *error){
  if (error) {
    response.error(error); // mark the function as failed and return the error object to the iOS device
  }
  else {
    response.success(user); // mark the function call as successful and return the user object to the iOS device
  }
}];

Mongoose difference between .save() and using update()

To modify a field in an existing entry in mongoose, what is the difference between using
model = new Model([...])
model.field = 'new value';
model.save();
and this
Model.update({[...]}, {$set: {field: 'new value'}});
The reason I'm asking this question is because of someone's suggestion to an issue I posted yesterday: NodeJS and Mongo - Unexpected behaviors when multiple users send requests simultaneously. The person suggested to use update instead of save, and I'm not yet completely sure why it would make a difference.
Thanks!
Two concepts first: your application is the client, MongoDB is the server.
The main difference is that with .save() you already have an object in your client-side code (or had to retrieve the data from the server) before writing it back, and you write back the whole thing.
On the other hand, .update() does not require the data to be loaded to the client from the server. All of the interaction happens server side, without retrieving anything to the client, so .update() can be very efficient when you are adding content to existing documents.
In addition, there is the multi parameter to .update() that allows the actions to be performed on more than one document matching the query condition.
You lose some convenience methods when using .update(), but for certain operations that is the trade-off you have to accept. For more information on the options available, see the documentation.
In short, .save() is a client-side interface, .update() is server side.
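A hedged side-by-side sketch of the two flows, inside an async function (Person and age are made-up names):

// Client-side flow: two round trips. Load the document, mutate it,
// write the whole thing back with .save().
const doc = await Person.findById(id);
doc.age = 33;
await doc.save();

// Server-side flow: one round trip. MongoDB applies the change itself.
await Person.updateOne({ _id: id }, { $set: { age: 33 } });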
Some differences:
As noted elsewhere, update is more efficient than find followed by save because it avoids loading the whole document.
A Mongoose update translates into a MongoDB update but a Mongoose save is converted into either a MongoDB insert (for a new document) or an update.
It's important to note that on save, Mongoose internally diffs the document and only sends the fields that have actually changed. This is good for atomicity.
By default validation is not run on update, but it can be enabled; see the sketch after this list.
The middleware API (pre and post hooks) is different.
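For the validation point above, the opt-in flag is runValidators; a minimal sketch, assuming a hypothetical Person schema whose age field has a min constraint:

// Update validators are off by default; enable them per call.
await Person.updateOne(
  { _id: id },
  { $set: { age: -5 } },     // would violate e.g. a min: 0 constraint
  { runValidators: true }    // Mongoose now rejects the update
);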
There is a useful feature in Mongoose called middleware. There are 'pre' and 'post' middlewares. The middlewares get executed when you do a save, but not during an update. For example, if you want to hash a password in the User schema every time the password is modified, you can use a pre hook to do it, as follows. Another useful example is setting lastModified for each document. The documentation can be found at http://mongoosejs.com/docs/middleware.html
UserSchema.pre('save', function(next) {
  var user = this;

  // only hash the password if it has been modified (or is new)
  if (!user.isModified('password')) {
    console.log('password not modified');
    return next();
  }
  console.log('password modified');

  // generate a salt
  bcrypt.genSalt(10, function(err, salt) {
    if (err) {
      return next(err);
    }
    // hash the password along with our new salt
    bcrypt.hash(user.password, salt, function(err, hash) {
      if (err) {
        return next(err);
      }
      // override the cleartext password with the hashed one
      user.password = hash;
      next();
    });
  });
});
One detail that should not be taken lightly: concurrency
As previously mentioned, when doing a doc.save(), you have to load a document into memory first, then modify it, and finally, doc.save() the changes to the MongoDB server.
The issue arises when a document is edited that way concurrently:
Person A loads the document (v1)
Person B loads the document (v1)
Person B saves changes to the document (it is now v2)
Person A saves changes to an outdated (v1) document
Person A will see Mongoose throw a VersionError because the document has changed since last loaded from the collection
Concurrency is not an issue when doing atomic operations like Model.updateOne(), because the operation is done entirely in the MongoDB server, which performs a certain degree of concurrency control.
Therefore, beware!
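To make the contrast concrete, here is a hedged sketch of an atomic alternative (Counter and clicks are made-up names):

// Two clients running this concurrently both succeed and no increment
// is lost, because MongoDB applies $inc server-side atomically.
await Counter.updateOne({ _id: id }, { $inc: { clicks: 1 } });

// A find-then-save version of the same increment can lose updates, or
// throw a VersionError, exactly as in the numbered scenario above.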

Struggling to build a JS/PHP validation function for my app

I have a web service that returns a JSON object when the web service is queried and a match is found; an example of a successful return is below:
{"terms":[{"term":{"termName":"Focus Puller","definition":"A focus puller or 1st assistant camera..."}}]}
If the query does not produce a match it returns:
Errant query: SELECT termName, definition FROM terms WHERE termID = xxx
Now, when I access this through my Win 8 Metro app, I parse the JSON notation object using the following code to get a JS object:
var searchTerm = JSON.parse(Result.responseText)
I then have code that processes searchTerm and binds the returned values to the app page control. If I enter a successful query that finds a match in the DB, everything works great.
What I can't work out is a way of validating a bad query. I want to test the value that is returned by var searchTerm = JSON.parse(Result.responseText), and continue doing what I'm doing now if it is a successful result, but handle the result differently on failure. What check should I make to test this? I am happy to implement additional validation either in my app or in the web service; any advice is appreciated.
Thanks!
There are a couple of different ways to approach this.
One approach would be to utilize the HTTP response headers to relay information about the query (i.e. HTTP 200 status for a found record, 404 for a record that is not found, 400 for a bad request, etc.). You could then inspect the response code to determine what you need to do. The pro of this approach is that this would not require any change to the response message format. The con might be that you then have to modify the headers being returned. This is more typical of the approach used with true RESTful services.
Another approach might be to return success/error messaging as part of the structured JSON response, such that your JSON might look like:
{
  "result": "found",
  "message": {
    "terms": [
      {
        "term": {
          "termName": "Focus Puller",
          "definition": "A focus puller or 1st assistant camera..."
        }
      }
    ]
  }
}
You could obviously change the value of result in the data to return an error and place the error message in message.
The pros here are that you don't have to worry about header modification, and that your returned data would always be parseable via JSON.parse(). The con is that you now have extra verbosity in your response messaging.
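For completeness, a hedged sketch of consuming that envelope on the client (showError and bindTerms are hypothetical helpers, and the failure value of result is assumed to be "error"):

var searchResult;
try {
  searchResult = JSON.parse(Result.responseText);
} catch (e) {
  // The body wasn't JSON at all, e.g. the raw "Errant query..." string.
  showError("Service returned an unreadable response");
  return;
}

if (searchResult.result === "found") {
  bindTerms(searchResult.message.terms); // hypothetical binding helper
} else {
  showError(searchResult.message);       // hypothetical error display
}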
