What is the difference between a resolver and a service in NestJS using GraphQL? - javascript

I do not understand the difference between a resolver and a service in a NestJS application using GraphQL and MongoDB.
I found examples like this one, where the resolver just calls a service, so the resolver functions stay small because each one simply calls a service function. But with this usage I don't understand the purpose of the resolver at all...
@Resolver('Tasks')
export class TasksResolver {
  constructor(
    private readonly taskService: TasksService
  ) {}

  @Mutation(type => WriteResult)
  async deleteTask(
    @Args('id') id: string,
  ) {
    return this.taskService.deleteTask(id);
  }
}
@Injectable()
export class TasksService {
  async deleteTask(id: string) {
    // Define the collection, get some data for checking, then update the dataset
    const Tasks = this.db.collection('tasks')
    const data = await Tasks.findOne({ _id: id })
    let res
    if (data.checkSomething) res = Tasks.updateOne({ _id: id }, { $set: { delete: true } })
    return res
  }
}
On the other hand, I can put all the logic into the resolver and leave only the MongoDB part in the service, but then the service functions are small and just wrap a simple MongoDB call. So why shouldn't I put that into the resolver as well?
@Resolver('Tasks')
export class TasksResolver {
  constructor(
    private readonly taskService: TasksService
  ) {}

  @Mutation(type => WriteResult)
  async deleteTask(
    @Args('id') id: string,
  ) {
    const data = await this.taskService.findOne(id)
    let res
    if (data.checkSomething) {
      const update = { $set: { delete: true } }
      res = this.taskService.updateOne(id, update)
    }
    return res
  }
}
@Injectable()
export class TasksService {
  findOne(id: string) {
    const Tasks = this.db.collection('tasks')
    return Tasks.findOne({ _id: id })
  }

  updateOne(id: string, update) {
    const Tasks = this.db.collection('tasks')
    return Tasks.updateOne({ _id: id }, update)
  }
}
What is the correct usage of resolvers and services? In both cases one of the two parts ends up as little more than a one-liner per function, so why split them at all?

You're right that it's a pretty linear call and there isn't much logic behind it, but the idea is to separate the concerns of each class. The resolver, much like a REST or RPC controller, should act as a gateway to your business logic, so that the logic can be easily re-used or re-called in other parts of the server. If you have a hybrid server with RPC or a REST + GQL combo, you could re-use the service to ensure both REST and GQL get the same return.
In the end, it comes down to your choice on what you want to do, but separating the resolver from the service (having thin gateways and fat logic classes) is Nest's opinion on the right design.
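For example, here is a hedged sketch of that reuse (the TasksController and its route are hypothetical, added only for illustration): a REST controller can delegate to the very same TasksService the resolver uses, so both transports return the same result.
import { Controller, Delete, Param } from '@nestjs/common';
import { TasksService } from './tasks.service';

@Controller('tasks')
export class TasksController {
  constructor(private readonly taskService: TasksService) {}

  // DELETE /tasks/:id runs the same business logic as the GraphQL mutation
  @Delete(':id')
  deleteTask(@Param('id') id: string) {
    return this.taskService.deleteTask(id);
  }
}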

Your service helps you fetch data from the database. Your resolver delivers that data to the user. Sometimes the data you deliver to the user is not the same as the data in the database, so the resolver can reshape the data to fit the user's needs before sending it.

In terms of best practice, the resolver or controller should be thought of as the manager. As is stereotypical, the manager shouldn't be doing any of the actual work except for telling the workers what to do. The manager determines who (which worker/service) should do the work; sometimes it might be two or more workers/services. Managers specialize in deciding who does what.
The workers on the other hand, execute on the actual task. In your case, another option would be to have a database "repository" for database commands like findOne, findOneByX, updateOne; and also a service to handle the actual logic of the task. So the service worker takes the instructions from the manager(resolver) and only uses their logic to tell their database fetching repository buddies what to fetch.
In this way, the manager manages who should do the task. The service contains the logic and tells the other repository methods focused on database fetches what to fetch.
// So you would have...
task.resolver.ts
task.service.ts
task.repository.ts
The task.resolver will contain one line that calls the task.service method
The task.service will contain the logic to manage the task
The task.repository will contain methods like what you have in your suggested task.service - essentially database only methods
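To make the split concrete, here is a minimal sketch of the three files (a hedged illustration: the Db injection, the checkSomething field, and the soft-delete update are carried over or assumed from the question, not a definitive implementation):
// task.resolver.ts - the "manager": routes the request, holds no business logic
@Resolver('Tasks')
export class TasksResolver {
  constructor(private readonly taskService: TasksService) {}

  @Mutation(type => WriteResult)
  async deleteTask(@Args('id') id: string) {
    return this.taskService.deleteTask(id);
  }
}

// task.service.ts - the "worker": owns the business rules
@Injectable()
export class TasksService {
  constructor(private readonly taskRepository: TasksRepository) {}

  async deleteTask(id: string) {
    const data = await this.taskRepository.findOne(id);
    // the business rule lives here, not in the resolver or the repository
    if (data.checkSomething) {
      return this.taskRepository.updateOne(id, { $set: { delete: true } });
    }
  }
}

// task.repository.ts - database-only methods
@Injectable()
export class TasksRepository {
  constructor(private readonly db: Db) {}

  findOne(id: string) {
    return this.db.collection('tasks').findOne({ _id: id });
  }

  updateOne(id: string, update: object) {
    return this.db.collection('tasks').updateOne({ _id: id }, update);
  }
}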

Related

Where to initOracleClient in REST MVC Architecture

I have Express Routing set up with multiple routes, each using a different Oracle connection. I have to call initOracleClient prior to getConnection, however I get an error (Error: NJS-077: Oracle Client library has already been initialized) when I try to initOracleClient in both routes. I've tried moving the initOracleClient to different locations in the structure; both at the app level and route level. Where in a REST MVC structure do you initialize the client?
A REST MVC application typically has some supporting infrastructure. That is to say, MVC is not a complete blueprint on how to structure the entirety of your program's code - only a general rule of thumb of how to assign certain responsibilities.
The library you're using needs initialization, and apparently this code should execute only once. There are several ways to go about it:
Initialize the client once before starting up the express server, and then pass in the ready-to-use client for use by route handlers. This may be the easiest to use, but must necessarily delay the .listen() call - so the time until your application starts responding to HTTP may be longer.
Use a pattern known as the Singleton to allow route handlers to initialize the client, but only execute the initialization once under the hood. Depending on how exactly the library is initialized (does it return a Promise? does it use a callback?), this may require some careful design - for example, you may need to store and return a Promise instance, so multiple consumers will be calling .then() on the same Promise.
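A minimal sketch of the first option, assuming an Express app exported from a separate module (the file names and libDir path are illustrative):
import oracledb from 'oracledb';
import app from './app.js'; // hypothetical module exporting the configured Express app

try {
  // initOracleClient is synchronous and must run exactly once per process,
  // before any route handler calls getConnection()
  oracledb.initOracleClient({ libDir: '/usr/local/lib/instantclient_19_8' });
} catch (err) {
  console.error(err);
  process.exit(1);
}

app.listen(3000);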
I implemented the Singleton pattern as suggested:
import oracledb from 'oracledb';

class PrivateOraInitSingleton {
  constructor() {
    try {
      oracledb.initOracleClient({ libDir: '/usr/local/lib/instantclient_19_8' });
    } catch (err) {
      console.error(err);
      process.exit(1);
    }
  }
}

class OraInitSingleton {
  constructor() {
    throw new Error('Use OraInitSingleton.getInstance()');
  }

  static getInstance() {
    if (!OraInitSingleton.instance) {
      OraInitSingleton.instance = new PrivateOraInitSingleton();
    }
    return OraInitSingleton.instance;
  }
}

export default OraInitSingleton;
Usage:
const object = OraInitSingleton.getInstance();

let connectionPromise;
try {
  connectionPromise = oracledb.getConnection({
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    connectString: process.env.CONNECT_STRING
  });
} catch (err) {
  console.error(err);
  process.exit(1);
}

What is the difference if use Subject over a Get Service method in Angular?

I have a service and two candidate solutions for this kind of problem, and I want to know which is best and when to prefer the Subject solution over the plain getter solution.
I have a UserModel that all my components access through my service. The behavior I want is: when I change the UserModel in the service, it changes everywhere in my application.
1 FIRST SERVICE
export class UserService {
  private userModel: UserModel = new UserModel();
  public userSubject$ = new Subject<any>();
  private timeOut = 20000;
  private mainConfig: MainConfig;

  constructor(private http: HttpClient) {
    this.mainConfig = new MainConfig();
  }

  getUserModel() {
    return this.userModel;
  }

  setUserModel(user) {
    this.userModel = user
  }
}
And then I just make this call in the HTML of all my components and it works:
this.userService.getUserModel().name
The second approach
2 SECOND SERVICE
@Injectable()
export class UserService {
  private userModel: UserModel = new UserModel();
  public userSubject$ = new Subject<any>();
  private timeOut = 20000;
  private mainConfig: MainConfig;

  constructor(private http: HttpClient) {
    this.mainConfig = new MainConfig();
  }

  getUserModel() {
    return this.userModel;
  }

  setUserModel(user) {
    this.userSubject$.next(this.userModel = user);
  }
}
And in my HTML file, I just use
{{ userModel.name }}
And I must add these lines to my example-component.ts.
In ngOnInit:
this.subTemp = this.userService.userSubject$.subscribe(
  user => this.userModel = user
);
In ngOnDestroy:
this.subTemp.unsubscribe();
What is the advantage of using the Subject over reading directly from the service? It seems like much more work.
If I could paraphrase your question(s), I'm guessing it'd go something like:
Why should I use Angular Services instead of just making async/http calls directly from the component?/Why should I write Service logic in a separate file as a dependency?
and
Why should I use lifecycle methods like ngOnInit and ngOnDestroy in conjunction with Services or async/http calls?
When it comes to questions like these, the Angular framework is more opinionated than other SPA technologies like React, Vue, etc. So while you're not technically forced to follow either of the approaches you listed, you should know the downsides and problems that emerge if you follow the first approach rather than the traditional injectable Service approach (number 2).
Generally speaking, the Angular team recommends following a unidirectional data flow pattern in your app implemented with Services. This means that data flow should generally come from Services which distribute the data to components and then to view templates.
Within this pattern, there's also an implication of separation of concerns which is a good practice to follow within any app. Services should handle fetching and handling data, components should handle view logic, and templates should be as clean and declarative as possible. Components and their templates should consume data that's been processed already. Relatedly, you should try to keep your components as pure as possible - meaning they produce as few side effects as possible. This is because components are dynamically mounted and unmounted in the course of a user session. Have a look at this article for more information on pure components.
Aside from the above architectural discussion of Services there are some other, more concrete consequences to be aware of:
Failure to unsubscribe from observables can lead to memory leaks in your application. With the first scenario you've outlined above, a component may be loaded 10-20 times in a user session and each time you're setting up a new subscription without tearing it down again. This can have a very real performance impact on your app.
The Angular compiler is optimized to add and remove dependencies dynamically, resulting in better app performance. If you keep all your Service code right in your component, they'll be larger and slower. From a UX perspective, components should be as light and nimble as possible so they can load quickly for the user.
If you register a service as a provider, the Angular injector will treat it as a singleton, meaning there can be only one instance of it. This is as opposed to the many instances of a Service class generated with each component if you were to use the first approach you listed. This is another performance benefit of using injectable Services.
The Angular compiler is optimized to work with the DI framework so your next step may be to learn more about it and the implications of going with one approach or the other. There's a long talk about creating your own Angular Compiler that's a couple years old now that might be helpful.
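For reference, a minimal sketch of that singleton registration (the module name and import path are illustrative): providing the service at the root module level makes Angular's injector create exactly one instance, shared by every component that injects it.
import { NgModule } from '@angular/core';
import { UserService } from './user.service';

@NgModule({
  // one shared instance for the whole app; components receive it via DI
  providers: [UserService]
})
export class AppModule {}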
What you want to know is the difference between the pull-based and the push-based method of retrieving data.
Method 1: pull based
As the name suggests, the pull-based method is the traditional approach where you, for example, call a function and it returns the value once. If you need the value again, the function has to be called again. And you know exactly when the data will arrive.
export class UserService {
  private userModel: UserModel = new UserModel();

  getUserModel() {
    return this.userModel;
  }

  setUserModel(user) {
    this.userModel = user
  }
}
some.component.ts
export class SomeComponent implements OnInit {
  userModel: UserModel;

  constructor(private _userService: UserService) { }

  ngOnInit() {
    // It's a one-time call and you control when you get (or `pull`) the data
    this.userModel = this._userService.getUserModel();
  }
}
Method 2: push based
Here the observable decides when you receive the data. This is the basis of reactive/asynchronous data flow. You subscribe to the data source and wait till it pushes the data. You have no knowledge of when the result might arrive.
@Injectable()
export class UserService {
  private userModel: UserModel;
  public userSubject$ = new Subject<any>();

  getUserModel() {
    return this.userSubject$.asObservable();
  }

  setUserModel(user) {
    this.userSubject$.next(this.userModel = user);
  }
}
some.component.ts
export class SomeComponent implements OnInit, OnDestroy {
  userModel: UserModel;
  closed$ = new Subject<any>();

  constructor(private _userService: UserService) { }

  ngOnInit() {
    // The stream stays open until closed, and the service/observable decides
    // when it sends (or `pushes`) the data
    this._userService.getUserModel().pipe(
      takeUntil(this.closed$) // <-- close the `getUserModel()` subscription when `this.closed$` completes
    ).subscribe(
      userModel => { this.userModel = userModel }
    );
  }

  ngOnDestroy() {
    this.closed$.next();
    this.closed$.complete();
  }
}
Angular uses observables extensively due to the nature of data flow in a typical web application and the flexibility they provide.
For example, the HTTP client returns an observable that you can latch onto and wait till the server returns any information. And RxJS provides numerous operators and functions to refine and adjust the data flow.
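One caveat worth adding to the push-based sketch above: a plain Subject only delivers values emitted after a subscriber arrives, so a component that initializes late never sees the current user. A BehaviorSubject, which replays the latest value to new subscribers, is a common substitute. A minimal sketch (an addition of mine, not part of the original question; the import path is RxJS 6+ style):
import { Injectable } from '@angular/core';
import { BehaviorSubject } from 'rxjs';

@Injectable()
export class UserService {
  // seeded with an initial value; late subscribers immediately receive the latest one
  private userSubject$ = new BehaviorSubject<UserModel>(new UserModel());

  getUserModel() {
    return this.userSubject$.asObservable();
  }

  setUserModel(user: UserModel) {
    this.userSubject$.next(user);
  }
}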

Test Meteor server method calling in client code with authenticated users

In a Meteor app, I need to test some client code that has statements such as
Meteor.call('foo', param1, param2, (error, result) => { .... });
And, in these methods, I have security checks to make sure that the method can only be called by authenticated users. However, these calls all fail under test because no user is authenticated.
In each server methods, I check users like this
if (!Roles.userIsInRole(this.userId, [ ...roles ], group)) {
  throw new Meteor.Error('restricted', 'Access denied');
}
I have read that we should export the server methods and test them directly, and I actually do this for server method testing, but that is not possible here, since I need to test client code that depends on Meteor.call.
I also would certainly not want to have if (Meteor.isTest || Meteor.isAppTest) { ... } all over the place....
I thought perhaps wrapping my exported methods like this :
export default function methodsWrapper(methods) {
  Object.keys(methods).forEach(method => {
    const fn = methods[method];
    methods[method] = (...args) => {
      const user = Factory.create('user', { roles: { 'default': [ 'admin' ] } });
      return fn.call({ userId: user._id }, ...args);
    };
  });
}
But it only works when calling the methods directly.
I'm not sure how I can test my client code with correct security validations. How can I test my client code with authenticated users?
Part I: Making the function an exported function
You just need to register the exported function with Meteor.methods as well.
imports/api/foo.js
export const foo = function(param1, param2) {
  if (!Roles.userIsInRole(this.userId, [ ...roles ], group)) {
    throw new Meteor.Error('restricted', 'Access denied');
  }
  // ...and other code
};
This method can then be imported in your server script:
imports/startup/methods.js
import { foo } from '../api/foo.js'

Meteor.methods({
  'foo': foo
});
So it is available to be called via Meteor.call('foo', ...). Note that the callback does not have to be declared in foo's function signature, since Meteor wraps it automatically.
imports/api/foo.tests.js
import { foo } from './foo.js'

if (Meteor.isServer) {
  // ... your test setup
  const result = foo(...) // call foo directly in your test
}
That covers the server. For testing on the client, here is the thing: you will not get around calling it via Meteor.call and testing the callback result. So on your client you would still test like this:
imports/api/foo.tests.js
if (Meteor.isClient) {
  // ... your test setup
  Meteor.call('foo', ..., function(err, res) {
    // assert no err and res...
  });
}
Additional info:
I would advise you to use mdg:validated-method, which allows the same functionality as above PLUS gives you more sophisticated control over method execution, document schema validation, and flexibility. It is also documented well enough to let you implement the requirement described above.
See: https://github.com/meteor/validated-method
Part II: Running you integration test with user auth
You have two options here to test your user authentication. Both have advantages and disadvantages, and there are debates about which is the better approach. No matter which of the two you test with, you need to write a server method that adds an existing user to a given set of roles.
Approach 1 - Mocking Meteor.user() and Meteor.userId()
This is basically described/discussed in the following resources:
A complete gist example
An example of using either mdg:validated-method or plain methods
Using a sinon spy; below there is also an answer of mine that mocks it manually, but that may not apply to your case because it is client-only. Using sinon requires the following package: https://github.com/practicalmeteor/meteor-sinon
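As a rough illustration of this approach, stubbing the user id with sinon could look like the sketch below (the fake id and the mocha-style setup are assumptions for illustration):
import sinon from 'sinon';
import { Meteor } from 'meteor/meteor';

describe('foo', function () {
  beforeEach(function () {
    // any code that reads Meteor.userId() now sees a fake, logged-in user
    sinon.stub(Meteor, 'userId').returns('fake-user-id');
  });

  afterEach(function () {
    // restore the original so other tests are unaffected
    Meteor.userId.restore();
  });

  it('runs with an authenticated user', function () {
    // ... exercise your method / exported function here
  });
});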
Approach 2 - Copying the "real" application behavior
In this case you completely test without mocking anything. You create real users and use their data in other tests as well.
In any case you need a server method that creates a new user with a given name and roles. Note that it should only live in a file whose name ends in .test.js; otherwise it can be considered a security risk.
/imports/api/accounts/accounts.tests.js
Meteor.methods({
  createTestUser(name, password, roles, group) {
    const userId = Accounts.createUser({ username: name, password: password });
    Roles.addUserToRoles(userId, roles, group);
    return userId;
  }
});
Note: I have often heard that this is bad testing, which I disagree with. Integration testing in particular should mimic the real behavior as closely as possible and should use less mocking/spying than unit tests do.

Angular4 Rxjs Observable Limit Backend HTTP Requests

I am trying to limit the number of unnecessary HTTP calls in my application, but every time I subscribe to an Observable a request is made to the server. Is there a way to subscribe to an observable without firing an HTTP request? My observable in the service looks like this:
getServices(): Observable<Service[]> {
  return this.http.get(this.serviceUrl).map(res => res.json()).catch(err => Observable.throw(err));
}
Then in my component I subscribe to that observable like this:
this.serviceService.getServices().subscribe(services => this.services = services);
What I would like to achieve is to store the data somehow on the service itself (so that I can use that data throughout the whole application without making requests from every component separately), but I would also like to know when that data is received by components (which I usually do by subscription).
Not sure if I understand your question correctly, but it seems that you want to cache the result of the first HTTP request in the service, so that if the first component fetches the data, the second one (another subscriber) retrieves the cached data. If that's the case, declare the service provider on the module level so Angular creates a singleton object. Then inside the service create a variable that stores the data after the first retrieval.
@Injectable()
export class DataService {
  mydata: string[];

  constructor(private http: Http) {}

  getServices(): Observable<string[]> {
    if (this.mydata) {
      return Observable.of(this.mydata); // return from cache
    } else {
      return this.http.get(this.serviceUrl)
        .map(res => res.json())
        .do(data => this.mydata = data) // store in the cache on first retrieval
        .catch(err => Observable.throw(err));
    }
  }
}
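An alternative sketch that leans on RxJS itself instead of a manual cache variable, assuming the RxJS 5 operator style used above (publishReplay caches the latest response and refCount shares one subscription among all subscribers):
@Injectable()
export class DataService {
  private services$: Observable<string[]>;

  constructor(private http: Http) {}

  getServices(): Observable<string[]> {
    if (!this.services$) {
      this.services$ = this.http.get(this.serviceUrl)
        .map(res => res.json())
        .publishReplay(1) // replay the latest emission to new subscribers
        .refCount();      // keep a single shared subscription to the source
    }
    return this.services$;
  }
}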

Using Apollo, how can I distinguish newly created objects?

My use case is the following:
I have a list of comments that I fetch using a GraphQL query. When the user writes a new comment, it gets submitted using a GraphQL mutation. Then I'm using updateQueries to append the new comment to the list.
In the UI, I want to highlight the newly created comments. I tried to add a property isNew: true on the new comment in mutationResult, but Apollo removes the property before saving it to the store (I assume that's because the isNew field isn't requested in the gql query).
Is there any way to achieve this?
It depends on what you mean by "newly created objects". If it is an authentication-based application with users that can log in, you can compare the create_date of a comment with some last_online date of the user. If users are not forced to create an account, you can store that information in local storage or cookies (when they last visited the website).
On the other hand, if you are thinking about real-time updates of the comments list, I would recommend you take a look at graphql-subscriptions, which works over websockets. It provides reactivity in your user interface via a pub-sub mechanism. A simple use case: whenever a new comment is added to a post, every user/viewer is notified, and the comment can be appended to the comments list and highlighted in any way you want.
In order to achieve this, you could create a subscription called newCommentAdded, which the client would subscribe to; every time a new comment is created, the server side of the application would publish it.
A simple implementation of such a case could look like this:
const Subscription = new GraphQLObjectType({
  name: 'Subscription',
  fields: {
    newCommentAdded: {
      type: Comment, // this would be your GraphQLObjectType for Comment
      resolve: (root, args, context) => {
        return root.comment;
      }
    }
  }
});
// then create the graphql schema with use of the above defined subscription
const graphQLSchema = new GraphQLSchema({
  query: Query, // your query object
  mutation: Mutation, // your mutation object
  subscription: Subscription
});
The above covers only the graphql-js part; it is also necessary to create a SubscriptionManager, which uses the PubSub mechanism.
import { SubscriptionManager, PubSub } from 'graphql-subscriptions';

const pubSub = new PubSub();

const subscriptionManagerOptions = {
  schema: graphQLSchema,
  setupFunctions: {
    newCommentAdded: (options, args) => ({
      newCommentAdded: {
        filter: (payload) => {
          // returning true means the subscription result is published to the
          // client side every single time you call the 'publish' method;
          // here you can add conditions for when to publish, e.g. the ID of the
          // currently logged-in user to whom the newly created comment belongs
          return true;
        }
      }
    })
  },
  pubsub: pubSub
};

const subscriptionManager = new SubscriptionManager(subscriptionManagerOptions);

export { subscriptionManager, pubSub };
And the final step is to publish the newly created comment to the client side via the SubscriptionManager instance created above. You could do that in the mutation method creating the new comment, or wherever you need it:
// here newComment is your comment instance
subscriptionManager.publish('newCommentAdded', { comment: newComment });
To run the pub-sub mechanism over websockets, it is necessary to run such a server alongside your main server. You can use the subscriptions-transport-ws module.
The biggest advantage of such a solution is that it provides reactivity in your application (real-time changes applied to the comments list below a post, etc.). I hope this might be a good choice for your use case.
I could see this being done a couple of ways. You are right that Apollo will strip the isNew value because it is not a part of your schema and is not listed in the query's selection set. I like to separate the concerns of the server data that is managed by Apollo and the front-end application state, which lends itself to using redux/flux or, more simply, managing it in your component's state.
Apollo gives you the option to supply your own redux store. You can allow apollo to manage its data fetching logic and then manage your own front-end state alongside it. Here is a write up discussing how you can do this: http://dev.apollodata.com/react/redux.html.
If you are using React, you might be able to use component lifecycle hooks to detect when new comments appear. This might be a bit of a hack but you could use componentWillReceiveProps to compare the new list of comments with the old list of comments, identify which are new, store that in the component state, and then invalidate them after a period of time using setTimeout.
componentWillReceiveProps(newProps) {
  // Compute a diff.
  const oldCommentIds = new Set(this.props.data.allComments.map(comment => comment.id));
  const nextCommentIds = new Set(newProps.data.allComments.map(comment => comment.id));
  const newCommentIds = new Set(
    [...nextCommentIds].filter(commentId => !oldCommentIds.has(commentId))
  );
  this.setState({ newCommentIds });

  // invalidate after 1 second
  setTimeout(() => {
    this.setState({
      newCommentIds: new Set()
    });
  }, 1000);
}
// Then somewhere in your render function have something like this.
render() {
  ...
  {
    this.props.data.allComments.map(comment => {
      const isNew = this.state.newCommentIds.has(comment.id);
      return <CommentComponent isNew={isNew} comment={comment} />;
    })
  }
  ...
}
The code above was right off the cuff so you might need to play around a bit. Hope this helps :)
