Select AuthGuard type on the fly - javascript

My app will use two different auth strategies - one for users using a browser and another for the public API. I'll set a header for those using a browser, and then my app will set the auth strategy based on the value of that header.
I have set up the two auth strategies, and given them names. I can now do this in my controller methods:
@Get()
@UseGuards(AuthGuard('strategy_name'))
async find() { }
What I would like to do, is NOT have to specify the auth guard type next to every controller method, nor the logic for determining which type to use. Instead, I'd like to put this logic in one place, which will be read by ALL calls to AuthGuard().
What's the best way to do this? Is there some kind of filter/hook/interceptor for AuthGuard?

You can create a new Guard that acts as a delegate and chooses the proper AuthGuard
(and therewith AuthStrategy) based on your condition.
@Injectable()
export class MyAuthGuard implements CanActivate {
  canActivate(context: ExecutionContext): boolean | Promise<boolean> | Observable<boolean> {
    const guard = this.getAuthGuard(context);
    return guard.canActivate(context);
  }

  private getAuthGuard(context: ExecutionContext): IAuthGuard {
    const request = context.switchToHttp().getRequest();
    // Here should be your logic to determine the proper strategy.
    if (request.header('myCondition')) {
      return new (AuthGuard('jwt'))();
    } else {
      return new (AuthGuard('other-strategy'))();
    }
  }
}
Then use it instead of the standard AuthGuard in your controller:
@UseGuards(MyAuthGuard)
@Get('user')
getUser(@User() user) {
  return { user };
}

Related

What is the difference of resolver/service in nestJS using graphql?

I do not understand the difference between a resolver and a service in a nestJS application using graphQl and mongoDB.
I found examples like this, where the resolver just calls a service, so the resolver functions are always small as they just call a service function. But with this usage I don't understand the purpose of the resolver at all...
@Resolver('Tasks')
export class TasksResolver {
  constructor(
    private readonly taskService: TasksService
  ) {}

  @Mutation(type => WriteResult)
  async deleteTask(
    @Args('id') id: string,
  ) {
    return this.taskService.deleteTask(id);
  }
}
@Injectable()
export class TasksService {
  async deleteTask(id: string) {
    // Define collection, get some data for checking and then update the dataset
    const Tasks = this.db.collection('tasks')
    const data = await Tasks.findOne({ _id: id })
    let res
    if (data.checkSomething) res = Tasks.updateOne({ _id: id }, { $set: { delete: true } })
    return res
  }
}
On the other hand, I can put all the service logic into the resolver and leave only the MongoDB part in the service, but then the services are small and just wrap a simple MongoDB call. So why shouldn't I put that into the resolver as well?
@Resolver('Tasks')
export class TasksResolver {
  constructor(
    private readonly taskService: TasksService
  ) {}

  @Mutation(type => WriteResult)
  async deleteTask(
    @Args('id') id: string,
  ) {
    const data = await this.taskService.findOne(id)
    let res
    if (data.checkSomething) {
      const update = { $set: { delete: true } }
      res = this.taskService.updateOne(id, update)
    }
    return res
  }
}
@Injectable()
export class TasksService {
  findOne(id: string) {
    const Tasks = this.db.collection('tasks')
    return Tasks.findOne({ _id: id })
  }

  updateOne(id: string, update) {
    const Tasks = this.db.collection('tasks')
    return Tasks.updateOne({ _id: id }, update)
  }
}
What is the correct usage of the resolver and the service? In both cases one part is reduced to nearly a one-liner per function, so why should I split it at all?
You're right that it's a pretty linear call and there isn't much logic behind it, but the idea is to separate the concerns of each class. The resolver, much like a REST or RPC controller, should act as a gateway to your business logic, so that the logic can be easily re-used or re-called in other parts of the server. If you have a hybrid server with RPC or a REST + GQL combo, you could re-use the service to ensure both REST and GQL get the same return.
In the end, it comes down to your choice on what you want to do, but separating the resolver from the service (having thin gateways and fat logic classes) is Nest's opinion on the right design.
Your service helps you fetch data from the database. Your resolver delivers that data to the user. Sometimes the data you deliver to the user is not the same as the data in the database, so the resolver shapes the data to match what the user asked for before sending it back.
In terms of best practice, the resolver or controller should be thought of as the manager. In this case (as is stereotypical), the manager shouldn't be doing any of the actual work except for telling the workers what to do. The manager determines who (which worker/service) should do the work - sometimes two or more workers/services. Managers specialize in deciding who does what.
The workers, on the other hand, execute the actual task. In your case, another option would be to have a database "repository" for database commands like findOne, findOneByX, updateOne; and also a service to handle the actual logic of the task. So the service worker takes the instructions from the manager (resolver) and only uses its logic to tell its database-fetching repository buddies what to fetch.
In this way, the manager manages who should do the task. The service contains the logic and tells the repository methods, which focus on database fetches, what to fetch.
// So you would have...
task.resolver.ts
task.service.ts
task.repository.ts
The task.resolver will contain one line that calls the task.service method.
The task.service will contain the logic to manage the task.
The task.repository will contain methods like those in your suggested task.service - essentially database-only methods.
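A minimal hedged sketch of that three-layer split (class and method names are illustrative, and the MongoDB Db instance is assumed to be provided to the repository):
// task.repository.ts - database-only methods
@Injectable()
export class TasksRepository {
  constructor(private readonly db: Db) {}

  findOne(id: string) {
    return this.db.collection('tasks').findOne({ _id: id })
  }

  markDeleted(id: string) {
    return this.db.collection('tasks').updateOne({ _id: id }, { $set: { delete: true } })
  }
}

// task.service.ts - business logic, no knowledge of GraphQL
@Injectable()
export class TasksService {
  constructor(private readonly tasksRepository: TasksRepository) {}

  async deleteTask(id: string) {
    const data = await this.tasksRepository.findOne(id)
    if (data && data.checkSomething) {
      return this.tasksRepository.markDeleted(id)
    }
  }
}

// task.resolver.ts - thin GraphQL gateway
@Resolver('Tasks')
export class TasksResolver {
  constructor(private readonly tasksService: TasksService) {}

  @Mutation(type => WriteResult)
  deleteTask(@Args('id') id: string) {
    return this.tasksService.deleteTask(id)
  }
}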

Nest.js custom pipe validator not working for method with #Req() and #Res() parameters

I have defined custom pipe in Nest.js
@Injectable()
export class QueryValidationPipe implements PipeTransform<GetDto> {
  public transform(value: GetDto, metadata: ArgumentMetadata): GetDto {
    console.log('TEST');
    (...)
    return value;
  }
}
I'm using it in controller A on a certain GET method, with the Nest.js @Query decorator:
@Get()
@UsePipes(new QueryValidationPipe())
@UseFilters(new ExceptionFilters())
public async getMethod(@Query() query: GetDto) {
  (...)
}
And it works fine: for every request, the pipe logs the test message to the console. Now in controller B I'm using the same decorator, but on a method that takes the underlying req and res objects as parameters:
@Get()
@UsePipes(new QueryValidationPipe())
@UseFilters(new ExceptionFilters())
public async getMethodTo(@Req() req: GetRequest, @Res() res: Response): Promise<Response> {
  (...)
}
Here it doesn't work. The pipe's transform method is not called, despite requests being received by the above method. What could be the cause of this? I didn't find any mention in the documentation of pipes not working when @Req and @Res parameters are used in a method.
And if that's the case (it seems it is), how can I fix this? I can't just use a @Query() query method parameter, because I need to send back different headers depending on a query param.
Pipes only work for @Body(), @Param(), @Query() and custom decorators in the REST context. There's a brief mention here, but that should definitely get added to the docs.
Technically, you could make a custom decorator for req and res and get pipes to run for them.
export const ReqDec = createParamDecorator(
  (data: unknown, ctx: ExecutionContext) => {
    const request = ctx.switchToHttp().getRequest();
    return request;
  },
);
And then use it as @ReqDec().
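For context, a minimal hedged sketch of how the custom decorator could be combined with the pipe from the question (note that the pipe now receives the whole request object as its value, not just the query):
@Get()
@UseFilters(new ExceptionFilters())
public async getMethodTo(
  @ReqDec(new QueryValidationPipe()) req: GetRequest,
  @Res() res: Response,
): Promise<Response> {
  // req has passed through the pipe; res is still available for setting custom headers
  return res.header('X-Example', 'value').send();
}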

What is the difference if use Subject over a Get Service method in Angular?

I have a service and two possible solutions for this kind of problem, and I want to know which is best and when to use the Subject solution over the plain service solution.
I have a UserModel that all my components access through my service. The behaviour I want is that when I change the UserModel from the service, the change is reflected across my whole application.
1 FIRST SERVICE
export class UserService {
  private userModel: UserModel = new UserModel();
  public userSubject$ = new Subject<any>();
  private timeOut = 20000;
  private mainConfig: MainConfig;

  constructor(private http: HttpClient) {
    this.mainConfig = new MainConfig();
  }

  getUserModel() {
    return this.userModel;
  }

  setUserModel(user) {
    this.userModel = user
  }
}
Then I just make this call in the HTML of all my components, and it works:
this.userService.getUserModel().name
The second approach
2 SECOND SERVICE
@Injectable()
export class UserService {
  private userModel: UserModel = new UserModel();
  public userSubject$ = new Subject<any>();
  private timeOut = 20000;
  private mainConfig: MainConfig;

  constructor(private http: HttpClient) {
    this.mainConfig = new MainConfig();
  }

  getUserModel() {
    return this.userModel;
  }

  setUserModel(user) {
    this.userSubject$.next(this.userModel = user);
  }
}
And in my HTML file, I just use
{{ userModel.name }}
And I must add these lines to my example-component.ts:
ngOnInit:
this.subTemp = this.userService.userSubject$.subscribe(
  user => this.userModel = user
);
ngOnDestroy:
this.subTemp.unsubscribe();
What is the advantage of using the Subject over reading directly from the service? It seems like much more work.
If I could paraphrase your question(s), I'm guessing it'd go something like:
Why should I use Angular Services instead of just making async/http calls directly from the component?/Why should I write Service logic in a separate file as a dependency?
and
Why should I use lifecycle methods like ngOnInit and ngOnDestroy in conjunction with Services or async/http calls?
When it comes to questions like these, the Angular framework is more opinionated than other SPA technologies like React, Vue, etc. So while you're not technically forced to follow either of the approaches you listed, you should be aware of the downsides and problems that emerge if you follow the first approach rather than the traditional injectable Service approach (number 2).
Generally speaking, the Angular team recommends following a unidirectional data flow pattern in your app implemented with Services. This means that data flow should generally come from Services which distribute the data to components and then to view templates.
Within this pattern, there's also an implication of separation of concerns which is a good practice to follow within any app. Services should handle fetching and handling data, components should handle view logic, and templates should be as clean and declarative as possible. Components and their templates should consume data that's been processed already. Relatedly, you should try to keep your components as pure as possible - meaning they produce as few side effects as possible. This is because components are dynamically mounted and unmounted in the course of a user session. Have a look at this article for more information on pure components.
Aside from the above architectural discussion of Services there are some other, more concrete consequences to be aware of:
Failure to unsubscribe from observables can lead to memory leaks in your application. With the first scenario you've outlined above, a component may be loaded 10-20 times in a user session and each time you're setting up a new subscription without tearing it down again. This can have a very real performance impact on your app.
The Angular compiler is optimized to add and remove dependencies dynamically, resulting in better app performance. If you keep all your Service code right in your component, they'll be larger and slower. From a UX perspective, components should be as light and nimble as possible so they can load quickly for the user.
If you register a service as a provider, the Angular compiler will treat it as a singleton meaning there can be only one instance of it. This is as opposed to the many instances of a Service class generated with each component if you were to use the first approach you listed. This is another performance benefit of using injectable Services.
The Angular compiler is optimized to work with the DI framework so your next step may be to learn more about it and the implications of going with one approach or the other. There's a long talk about creating your own Angular Compiler that's a couple years old now that might be helpful.
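As a minimal hedged sketch of that singleton registration (using the UserService from the question), providing the service at the root injector is one common way to guarantee a single shared instance:
// user.service.ts - one instance shared by every component that injects it
@Injectable({ providedIn: 'root' })
export class UserService {
  // ... same body as in the question
}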
What you want to know is the difference between the pull-based and push-based methods of retrieving data.
Method 1: pull based
As the name suggests, the pull-based method is the traditional approach where you, for example, call a function and it returns the value once. If you need the value again, the function must be called again. And you know exactly when the data will arrive.
export class UserService {
  private userModel: UserModel = new UserModel();

  getUserModel() {
    return this.userModel;
  }

  setUserModel(user) {
    this.userModel = user
  }
}
some.component.ts
export class SomeComponent implements OnInit {
  userModel: UserModel;

  constructor(private _userService: UserService) { }

  ngOnInit() {
    // It's a one time call and you control when you get (or `pull`) the data
    this.userModel = this._userService.getUserModel();
  }
}
Method 2: push based
Here the observable decides when you receive the data. This is the basis of reactive/asynchronous data flow. You subscribe to the data source and wait until it pushes the data. You have no knowledge of when the result might arrive.
@Injectable()
export class UserService {
  private userModel: UserModel;
  public userSubject$ = new Subject<any>();

  getUserModel() {
    return this.userSubject$.asObservable();
  }

  setUserModel(user) {
    this.userSubject$.next(this.userModel = user);
  }
}
some.component.ts
export class SomeComponent implements OnInit, OnDestroy {
  userModel: UserModel;
  closed$ = new Subject<any>();

  constructor(private _userService: UserService) { }

  ngOnInit() {
    // The stream is open until closed and the service/observable decides when it sends (or `pushes`) the data
    this._userService.getUserModel().pipe(
      takeUntil(this.closed$) // <-- close the `getUserModel()` subscription when `this.closed$` is complete
    ).subscribe(
      userModel => { this.userModel = userModel }
    );
  }

  ngOnDestroy() {
    this.closed$.next();
    this.closed$.complete();
  }
}
Angular uses observables extensively due to the nature of data flow in a typical web application and the flexibility they provide.
For example, the HTTP client returns an observable that you can latch on to and wait until the server returns the data. RxJS also provides numerous operators and functions to refine and adjust the data flow.
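As a small hedged illustration of that push-based flow with the HTTP client (the URL and response shape are illustrative, not from the question):
// Assumes HttpClient is injected as `http`; the subscription is torn down with the same takeUntil pattern as above
this.http.get<UserModel>('/api/user').pipe(
  takeUntil(this.closed$)
).subscribe(
  user => this.userModel = user
);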

Usage of class in 'typing' the variable (vs interface) and 'modelling' (for Typescript Angular 2 project)

Let say I have a model class
class User {
  name: string;
  email: string;
}
and a service related to user
class UserService {
  getInfo (usr: User) {
    console.log(usr);
  }
}
The usr: User is an input parameter of the getInfo method, and no User instance is created by that statement, since User just "types" usr. Instead, we could replace it with an interface, couldn't we?
Yeah, I am aware of the official Angular 2 style guide that suggests using a class instead of an interface,
quote: "Consider using a class instead of an interface"
and I can accept it, although it is weird.
My question is, WHY WHY WHY use a class to "type" a variable, since we don't create any instance from it? We can replace the UserService class as shown below, and it works too...
class UserService {
  getInfo (usr: any) { // this works too
    console.log(usr);
  }
}
(please correct me if I am wrong)
[#1] This will create a property that has the class content
class UserService {
  usr: User = User; // usr = { name: '', email: '' }
}
[#2] This is used by Angular 2 to do dependency injection (assuming I have listed User as a provider), and in this case it is similar to #1, as we are just copying User into usr. (Usually we do DI on a service rather than a model, since injecting a model has the same effect as "assigning" it to a variable, am I correct?)
class UserService {
  constructor (private usr: User) {}
}
[#3] And this confuses me... why do we still do it if we don't need it? (Because nothing is created, there is no usage here.)
class UserService {
  usr: User; // usr: any; (?)
}

class UserMoreService {
  getInfo (usr: User) { console.log(usr); } // usr: any (?)
}
Lastly, may I know what the motivation is behind creating a "model" class in an Angular 2 project?
We have the component to display data, and to get the data we can ask the related service, which will get it from the server.
Can you see this? We don't need a "third party" model to "hold" the data, because everything can be done without declaring a "model" class. Can you please tell me when to create a "model"? I don't see any reason to create one...
It is not a must but just a suggestion.
Why?
An interface-class can be a provider lookup token in Angular dependency injection.
Source: https://angular.io/styleguide#!#03-03
Creating a class is pretty much the same amount of work as creating an interface, but you can use these classes as a class-interface in your provider tokens later on.
{ provide: MinimalLogger, useExisting: LoggerService },
This can be used as a type of inheritance among Angular providers.
export abstract class MinimalLogger {
  logs: string[];
  logInfo: (msg: string) => void;
}
When you use a class this way, it's called a class-interface. The key benefit of a class-interface is that you can get the strong-typing of an interface and you can use it as a provider token in the way you would a normal class.
Source: https://angular.io/docs/ts/latest/cookbook/dependency-injection.html#!#class-interface
Side note: if you're sure you aren't going to use it as a provider token or the like, then IMO you should use an interface, since interfaces disappear after the code is transpiled to JavaScript. Hence, they don't use any memory.
Using a class as an interface gives you the characteristics of an interface in a real JavaScript object.
Of course a real object occupies memory. To minimize memory cost, the class should have no implementation
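As a small hedged sketch of that trade-off (names are illustrative): an interface is erased at compile time and therefore cannot serve as a provider token, while an implementation-free abstract class survives transpilation and can.
// Erased at compile time - cannot be used as a provider token
export interface UserLike {
  name: string;
  email: string;
}

// Survives transpilation - usable both as a type and as a DI token
export abstract class UserToken {
  name: string;
  email: string;
}

// In a module's providers array:
// { provide: UserToken, useValue: { name: 'Jane', email: 'jane@example.com' } }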

Angular 2: how do I get route parameters from CanLoad implementation?

I've added a canLoad guard to a state that's lazy loaded. The problem that I'm having is that I can't get any route parameters if the state is being initialized from a different state using router.navigate().
So here is my route configuration:
path: 'programs/:programId/dashboard',
loadChildren: './program.module#ProgramModule',
canLoad: [ProgramGuard]
and this is the short version of ProgramGuard:
export class ProgramGuard implements CanLoad {
  canLoad(route: Route): Observable<boolean> {
    // The route object doesn't have any reference to the route params
    let programId = paramFromRoute;
    return Observable.create(observer => {
      if (programId == authorizedProgramId) {
        observer.next(true);
      } else {
        observer.next(false);
      }
      observer.complete();
    });
  }
}
I have tried injecting ActivatedRoute to get the params from there, but nothing.
If the user types the URL in the browser, then there is no problem because I can extract the parameters from the location object. But when using router.navigate, the browser's location is still set to the previous state.
Any help or ideas will be greatly appreciated.
I tried to do something similar and ended up changing to a canActivate guard instead. Note also that the canLoad guards block any preloading that you may want to do.
In theory, if a user cannot access a route, it would be great not to even load it. But in practice it seems to be too limited to allow making that determination.
Something you could try (I didn't think of it earlier when I was trying to do this) ... you could add a parent route (component-less) that has a canActivate guard that can check the parameters. Then route to the lazy loaded route if the user has authorization.
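A hedged sketch of that component-less parent approach (the guard name and authorization check are illustrative, not from the question) could look like this:
const routes: Routes = [
  {
    // Component-less parent: its canActivate guard runs with access to :programId
    path: 'programs/:programId',
    canActivate: [ProgramParamGuard],
    children: [
      {
        path: 'dashboard',
        loadChildren: './program.module#ProgramModule',
      },
    ],
  },
];

@Injectable()
export class ProgramParamGuard implements CanActivate {
  canActivate(route: ActivatedRouteSnapshot): boolean {
    const programId = route.params['programId'];
    // Compare against whatever the app considers authorized
    return programId === authorizedProgramId;
  }
}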
I was able to retrieve the path, including the route parameters, using the Location object.
canLoad() {
  // Don't even load the module if not logged in
  if (!this.userSessionService.isSessionValid()) {
    this.userSessionService.redirectUrl = this.location.path();
    this.router.navigate(['/auth']);
    return false;
  }
  return true;
}
You just need to inject the Location object in the constructor.
Now you can access the query params by using this snippet:
this.router.getCurrentNavigation().extractedUrl.queryParams
inside the canLoad method, without losing the lazy loading feature.
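For context, a minimal hedged sketch of that snippet inside a guard (assuming the Router is injected as router):
canLoad(route: Route, segments: UrlSegment[]): boolean {
  // getCurrentNavigation() is only non-null while a navigation is in progress
  const nav = this.router.getCurrentNavigation();
  const queryParams = nav ? nav.extractedUrl.queryParams : {};
  console.log(queryParams);
  return true;
}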
I know it's too late, but I found a solution that works like a charm.
I hope this will help new members who face the same problem as me.
canLoad(
  route: Route,
  segments: UrlSegment[]): Observable<boolean> | Promise<boolean> | boolean {
  if (!this.auth.isLoggedIn()) {
    this.route.navigate(['auth/login'], { queryParams: { redirect_url: '/' + segments[0].path } });
    return false;
  }
  return true;
}
Why not build the URL from the paths of the segments?
/**
 * Test if the user has enough rights to load the module
 */
canLoad(route: Route, segments: UrlSegment[]): boolean | Observable<boolean> | Promise<boolean> {
  // We build the url with every path of the segments
  const url = segments.map(s => s.path).join('/')
  // We handle the navigation here
  return this.handleNavigation(url, route)
}
First, you can declare a variable like the following:
routeSnapshot: ActivatedRouteSnapshot;
Then inject the ActivatedRouteSnapshot class in the constructor like the following:
constructor(private routeSnapshot: ActivatedRouteSnapshot)
Now you can use this.routeSnapshot inside the canLoad method.
