Difference between forRoot and forFeature [NestJS] - javascript

I would like to understand the difference between forRoot and forFeature in nest js dynamic modules.
I also would like to understand this difference in the case of the TypeOrm dynamic module used with nestjs.

Generally speaking (though this does not always hold true), forRoot/register is a way to provide the configuration the module is going to use, whereas forFeature is a way to create a dynamic provider that has its own injection token.
In the case of the TypeOrmModule you mentioned, forRoot() sets up the connection information that Nest makes use of, and Nest creates the injection token for the connection that is created. For forFeature, Nest takes that connection injection token under the hood and creates an injection token and custom provider for each of the repositories that were passed in. The token usually looks like <EntityName>Repository, and a factory is used under the hood to inject the connection and get the repository out of the TypeORM system so it can be injected into your regular services.
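To make that concrete, here is a rough, simplified sketch of the kind of custom provider forFeature builds under the hood. The User entity is an assumption, and the real implementation lives inside @nestjs/typeorm and resolves the connection/data source token created by forRoot():

import { DataSource } from 'typeorm';
import { getRepositoryToken } from '@nestjs/typeorm';
import { User } from './user.entity'; // hypothetical entity

export const userRepositoryProvider = {
  provide: getRepositoryToken(User), // roughly the "UserRepository" token
  useFactory: (dataSource: DataSource) => dataSource.getRepository(User),
  inject: [DataSource], // the connection created by forRoot()
};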

From nestjs discord,
forRoot/forRootAsync: configure a module one time. This is either for a global service, or a re-used configuration internally
forFeature/forFeatureAsync: make use of the configuration from forRoot/forRootAsync for a specific provider. This usually creates an injection token.
register/registerAsync: a module that can be registered multiple times with different configurations each time.

The official explanation (https://docs.nestjs.com/fundamentals/dynamic-modules#community-guidelines) is also helpful.
Use forRoot if you are expecting to configure a dynamic module once and reuse that configuration in multiple places.
Use forFeature if you need to use the configuration of a dynamic module's forRoot but want to modify some configuration specifics in the calling module.
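A minimal usage sketch with TypeOrmModule (the User entity, the UsersModule/UsersService names and the connection options are assumptions):

// app.module.ts - configure the connection once with forRoot()
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { UsersModule } from './users/users.module';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'postgres',
      host: 'localhost',
      database: 'test',
      entities: [__dirname + '/**/*.entity{.ts,.js}'],
    }),
    UsersModule,
  ],
})
export class AppModule {}

// users.module.ts - register repositories for this feature with forFeature()
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { User } from './user.entity';
import { UsersService } from './users.service';

@Module({
  imports: [TypeOrmModule.forFeature([User])],
  providers: [UsersService],
})
export class UsersModule {}

// users.service.ts - inject the repository provider created by forFeature()
import { Injectable } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import { User } from './user.entity';

@Injectable()
export class UsersService {
  constructor(
    @InjectRepository(User) private readonly usersRepository: Repository<User>,
  ) {}

  findAll() {
    return this.usersRepository.find();
  }
}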

Related

How to mock library by SinonJs?

I have a file: browser-launcher.ts
import * as Browser from "#lib/browser";

class BrowserLauncher {
  browser;

  launch(options) {
    this.browser = Browser(options);
  }
}

export const browserLauncher = new BrowserLauncher();
How can I mock the library '#lib/browser' to test the launch method of BrowserLauncher?
What is the right way to test the browserLauncher variable in the browser-launcher.ts module?
Sinon is not really a library that deals with your specific problem, which is substituting a module for a fake one at the module loader level, as that is very specific to the environment in which you are working. Sinon only deals with the actual creation of a substitute (a mock or stub implementation). Such module substitution is solved by specific third-party libraries such as proxyquire, rewire and the like, and the dependent module you want to replace is called a "link seam" in testing literature.
You can see a how-to by us in the Sinon team for how to do this in CommonJS environments: https://sinonjs.org/how-to/link-seams-commonjs/.
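For illustration, a minimal sketch using proxyquire in a CommonJS test. It assumes the "#lib/browser" specifier resolves at the Node module-loader level (which may need extra configuration), and the file paths and fake shape are assumptions:

// browser-launcher.test.js
const proxyquire = require('proxyquire');
const sinon = require('sinon');
const assert = require('assert');

// Fake replacement for the Browser module.
const browserFake = sinon.stub().returns({});

// Load browser-launcher with the link seam replaced.
const { browserLauncher } = proxyquire('./browser-launcher', {
  '#lib/browser': browserFake,
});

browserLauncher.launch({ headless: true });
assert(browserFake.calledOnceWith({ headless: true }));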
Seeing the #lib/... string makes me think this is a webpack-specific problem, in which case you should find some module replacement library that deals with webpack. One such library is inject-loader.
That being said, sometimes Sinon can be used to replace exports on some module, but this is totally environment specific! Spec conforming ES Modules export an immutable namespace, so you are not supposed to be able to override exports.
You can see Sinon's expected behavior in our test code, where you see that an export such as export default { foo(){} } can be stubbed (since its members are not immutable), whereas export function foo(){} cannot be stubbed.
The only way you can stub the exports then is by having your ES environment be non-compliant: producing exports that are writable or disabling the read-only nature.
Testing the module without using link seams
In another answer I detail two ways you can use plain dependency injection to do this without external tooling. You should check that out, as it is quite straightforward, but the downside is that it will require a small test-only change to your production code to facilitate it. It's a balancing act of pros and cons: change code or introduce more dependencies.
I have written a more elaborate example of this technique on the Sinon issue tracker you might want to check out, where I show how to optionally inject the dependencies.
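As an illustration of that plain dependency-injection approach, a minimal sketch (the optional constructor parameter is an assumption added for testability, not part of the original code):

// browser-launcher.ts
import * as Browser from "#lib/browser";

class BrowserLauncher {
  browser;

  // The test can pass a fake factory; production code falls back to the real module.
  constructor(private browserFactory = Browser) {}

  launch(options) {
    this.browser = this.browserFactory(options);
    return this.browser;
  }
}

export const browserLauncher = new BrowserLauncher();

// In a test:
// const fake = sinon.stub().returns({});
// const launcher = new BrowserLauncher(fake);
// launcher.launch({ headless: true });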

NestJS: Using forRoot / forChild in custom module - race condition?

Repo is available here to highlight the issue.
I am having a problem with a race condition. I have created a ConfigModule - this has a forRoot and a forChild.
The forRoot sets up the loading of the .env file and the forChild uses it in another module.
The problem is that forChild is called before forRoot. The ConfigService would be injected with missing config because forRoot hasn't executed first.
AppModule > ConfigModule.forRoot
InstanceModule > ConfigModule.forChild
I placed some simple console.log commands that output this:
I am in Config Module FOR CHILD
I am in Config Module FOR ROOT
As you can see, forChild is being executed first. I tried using forwardRef and that didn't work.
If you let the application run you will see
[2019-03-24T11:49:33.602] [ERROR] ConfigService - There are missing mandatory configuration: Missing PORT
[2019-03-24T11:49:33.602] [FATAL] ConfigService - Missing mandatory configuration, cannot continue!, exiting
This is because I check that certain process.env variables are available, which are loaded in via dotenv. Of course, because forRoot isn't executed first, forChild returns its own new instance of the ConfigService.
ConfigService validates the availability of the environment variables.
So, basically, the forChild is executing and returning its own ConfigService before forRoot.
To make it work, if you comment out the InstanceModule inside the AppModule, the application will start listening and return the port number from an environment variable.
Of course, because InstanceModule uses the forChild - there is a race condition.
Why does this not work?
1) Nest builds up a dependency graph and instantiates the given modules and their providers according to this graph. The order of your imports or the naming of your dynamic modules' methods (forRoot/forChild) does not influence the order of instantiation.
2) When you create dynamic modules, each module will be its own instance; they won't be singletons like regular modules. In your case, you'd create two different ConfigModule instances and with them, two different ConfigService instances; hence they won't share your .env configuration. This can't work, independent of the instantiation order.
Alternatives
Have a look at the nestjs/typeorm package. Under the hood, it creates a shared TypeOrmCoreModule, which is shared between the different dynamic module instances created by TypeOrmModule.forRoot / TypeOrmModule.forFeature. For it to be shared and dynamic at the same time, it has to be made global with the @Global() decorator, though. In your case, since you don't have any configuration in your forChild imports, you could just make the whole ConfigModule global and then omit the forChild() imports, since the ConfigService would be globally available anyway.
If you don't want your service to be globally available, you could initialize your service after the startup process, for example in your AppModule's onModuleInit method.
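A minimal sketch of the global-module alternative (the ConfigService constructor and the envFilePath option are assumptions):

// config.module.ts
import { DynamicModule, Global, Module } from '@nestjs/common';
import { ConfigService } from './config.service';

@Global() // exported providers become available everywhere without importing ConfigModule
@Module({})
export class ConfigModule {
  static forRoot(envFilePath = '.env'): DynamicModule {
    return {
      module: ConfigModule,
      providers: [
        { provide: ConfigService, useValue: new ConfigService(envFilePath) },
      ],
      exports: [ConfigService],
    };
  }
}

With this in place, InstanceModule can inject ConfigService directly and the forChild() import can be dropped.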

Angular: Where to put reusable configuration

Just getting started with Angular:
I'd like to create a shared configuration singleton containing, e.g., the baseURL for service endpoints.
What is the pattern for this in Angular?
I recommend using something like a constant (https://docs.angularjs.org/api/ng/type/angular.Module#constant). It can be injected into any part of your app, even during the config phase.
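For example, a minimal sketch (the module name, constant name and baseURL value are assumptions):

// app.config.js
angular.module('app')
  .constant('appConfig', {
    baseURL: 'https://api.example.com'
  });

// injectable anywhere, including the config phase
angular.module('app')
  .factory('UserService', ['$http', 'appConfig', function ($http, appConfig) {
    return {
      getUsers: function () {
        return $http.get(appConfig.baseURL + '/users');
      }
    };
  }]);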

Does AMD format require that modules are singletons?

Is there anything in the AMD specification to say that require'd modules must be supplied with the same object? It seems to be fairly common practice to assume that a require'd module is a single instance supplied to all requiring modules, but is there anything to prevent a module loader treating loaded modules as merely cached (possibly reloading them at some point)?
For example (hypothetically speaking), would an AMD loader be guaranteed to distribute the same instance of a message bus module across various different dependent modules, so they could use it to send each other messages?
Yes, modules should be singletons.
From the spec:
define() function
...
factory
The third argument, factory, is a function that should be executed to instantiate the module or an object. If the factory is a function it should only be executed once. If the factory argument is an object, that object should be assigned as the exported value of the module.
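As a sketch of the hypothetical message bus (module names made up), the factory below runs only once, so every dependent module receives the same exported object:

define('messageBus', [], function () {
  var handlers = {};
  return {
    on: function (topic, fn) {
      (handlers[topic] = handlers[topic] || []).push(fn);
    },
    emit: function (topic, msg) {
      (handlers[topic] || []).forEach(function (fn) { fn(msg); });
    }
  };
});

// both modules receive the exact same messageBus instance
define('sender', ['messageBus'], function (bus) {
  return { send: function (msg) { bus.emit('ping', msg); } };
});

define('receiver', ['messageBus'], function (bus) {
  bus.on('ping', function (msg) { console.log('received', msg); });
});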

Require pattern Browserify / Angular

I'm working on a project with Angular and Browserify. This is the first time I'm using these two tools together, so I would like some advice on the right way to require files with Browserify.
We may import those files in different ways. Until now I have experimented with this approach:
Angular App:
app
  _follow
    - followController.js
    - followDirective.js
    - followService.js
    - require.js
  - app.js
For each folder containing the files for a feature, I created a require.js file in which I require all the files of that folder, like so:
var mnm = require('angular').module('mnm');
mnm.factory('FollowService', ['Restangular',require('./followService')]);
mnm.controller('FollowController',['$scope','FollowService',require('./followController')]);
mnm.directive('mnmFollowers', ['FollowService',require('./followDirective')]);
and then require all the require.js files in a single file called app.js, which will generate the bundle.js
Question:
Is this way of requiring the files a good structure, or will it cause problems when I need to test? I would like to see how you achieve a good structure with Angular and Browserify.
AngularJS and browserify aren't, sadly, a great match. Certainly not like React and browserify, but I digress.
What has worked for me is having each file as an AngularJS module (because each file is already a CommonJS module) and having the files export their AngularJS module name.
So your example would look like this:
app/
  app.js
  follow/
    controllers.js
    directives.js
    services.js
    index.js
The app.js would look something like this:
var angular = require('angular');
var app = angular.module('mnm', [
  require('./follow')
]);
// more code here
angular.bootstrap(document.body, ['mnm']);
The follow/index.js would look something like this:
var angular = require('angular');
var app = angular.module('mnm.follow', [
  require('./controllers'),
  require('./directives'),
  require('./services')
]);
module.exports = app.name;
The follow/controllers.js would look something like this:
var angular = require('angular');
var app = angular.module('mnm.follow.controllers', [
  require('./services'), // internal dependency
  'ui.router' // external dependency from earlier require or <script/>
  // more dependencies ...
]);
app.controller('FollowController', ['$scope', 'FollowService', function ...]);
// more code here
module.exports = app.name;
And so on.
The advantage of this approach is that you keep your dependencies as explicit as possible (i.e. inside the CommonJS module that actually needs them) and the one-to-one mapping between CommonJS module paths and AngularJS module names prevents nasty surprises.
The most obvious problem with your approach is that you're keeping the actual dependencies that will be injected separate from the function that expects them, so if a function's dependencies change, you have to touch two files instead of one. This is a code smell (i.e. a bad thing).
For testability either approach should work as Angular's module system is essentially a giant blob and importing two modules that both define the same name will override each other.
EDIT (two years later): Some other people (both here and elsewhere) have suggested alternative approaches so I should probably address them and what the trade-offs are:
Have one global AngularJS module for your entire app and just do requires for side-effects (i.e. don't have the sub-modules export anything but manipulate the global angular object).
This seems to be the most common solution but kind of flies in the face of having modules at all. It is the most pragmatic approach, however, and if you're using AngularJS you're already polluting globals, so I guess having purely side-effect based modules is the least of your architectural problems.
Concatenate your AngularJS app code before passing it to Browserify.
This is the most literal solution to "let's combine AngularJS and Browserify". It's a valid approach if you're starting from the traditional "just blindly concatenate your app files" position of AngularJS and want to add Browserify for third-party libs, so I guess that makes it valid.
As far as your app structure goes this doesn't really improve anything by adding Browserify, though.
Like 1 but with each index.js defining its own AngularJS sub-module.
This is the boilerplate approach suggested by Brian Ogden. This suffers from all the drawbacks of 1 but creates some semblance of hierarchy within AngularJS in that at least you have more than one AngularJS module and the AngularJS module names actually correspond to your directory structure.
However the major drawback is that you now have two sets of namespaces to worry about (your actual modules and your AngularJS modules) but nothing enforcing consistency between them. Not only do you have to remember to import the right modules (which again purely rely on side-effects) but you also have to remember to add them to all the right lists and add the same boilerplate to every new file. This makes refactoring incredibly unwieldy and makes this the worst option in my opinion.
If I had to choose today, I would go with 2 because it gives up all pretense of AngularJS and Browserify being able to be unified and just leaves both to do their own thing. Plus if you already have an AngularJS build system it literally just means adding an extra step for Browserify.
If you're not inheriting an AngularJS code base and want to know which approach works best for starting a new project instead: don't start a new project in AngularJS. Either pick Angular2 which supports a real module system out of the box, or switch to React or Ember which don't suffer from this problem.
I was trying to use browserify with Angular but found it got a bit messy. I didn't like the pattern of creating a named service / controller then requiring it from another location, e.g.
angular.module('myApp').controller('charts', require('./charts'));
The controller name / definition is in one file, but the function itself is in another. Also, having lots of index.js files makes it really confusing if you have lots of files open in an IDE.
So I put together this gulp plugin, gulp-require-angular, which allows you to write Angular using standard Angular syntax. All JS files that contain Angular modules, along with the dependencies of Angular modules that appear in your main module's dependency tree, are require()'d into a generated entry file, which you then use as your Browserify entry file.
You can still use require() within your code base to pull in external libraries (e.g. lodash) into services / filters / directives as needed.
Here's the latest Angular seed forked and updated to use gulp-require-angular.
I've used a hybrid approach much like pluma's. I created ng-modules like so:
var name = 'app.core';
angular.module(name, [])
  .service('srvc', ['$rootScope', '$http', require('./path/to/srvc')])
  .service('srvc2', ['$rootScope', '$http', require('./path/to/srvc2')])
  .config...
  .etc
module.exports = name;
I think the difference is that I don't define individual ng-modules as dependencies to the main ng-module, in this case I wouldn't define a Service as an ng-module and then list it as a dep of the app.core ng-module. I try to keep it as flat as possible:
// srvc.js - see below
module.exports = function ($rootScope, $http) {
  var api = {};
  api.getAppData = function () { ... };
  api.doSomething = function () { ... };
  return api;
};
Regarding the comment about code smell, I disagree. While it's an extra step, it allows for some great configurability in terms of testing against mock services. For instance, I use this quite a bit for testing against services that might not have an existing server API ready:
angular.module(name, [])
  // .service('srvc', ['$rootScope', '$http', require('./path/to/srvc')])
  .service('srvc', ['$rootScope', '$http', require('./path/to/mockSrvc')])
So any controller or object dependent on srvc doesn't know which one it is getting. I could see this getting a bit convoluted in terms of services being dependent on other services, but to me that is bad design. I prefer to use ng's event system to communicate between services so that you keep their coupling down.
Alan Plum's answer is just not a great answer, or at least not a great demonstration of CommonJS modules and Browserify with Angular. The claim that Browserify does not mix well with Angular, compared to React, is just not true.
Browserify and a CommonJS module pattern work great with Angular, allowing you to organize by features instead of types, keep vars out of global scope and share Angular modules across apps easily. Not to mention you never need to add a single <script> tag to your HTML again, thanks to Browserify finding all your dependencies.
What is particularly flawed in Alan Plum's answer is not letting the requires in each index.js for each folder dictate the dependencies for Angular modules, controllers, services, configurations, routes etc. There is no need for a single require in the angular.module instantiation, nor a single module.exports, as in the context that Alan Plum's answer suggests.
See here for a better module pattern for Angular using Browserify: https://github.com/Sweetog/yet-another-angular-boilerplate
