Clean Architecture in NodeJS, how do useCases interact with each other? - javascript

I am trying to implement the Clean Architecture by Bob Martin in my project and I have a question.
How do use-cases interact with each other?
For example:
I have a Department entity and Employee entity.
Department entity has a peopleCount field
Whenever a new Employee is created, it is also assigned to a Department, which means that peopleCount must increase by 1.
So how should that interaction between say addEmployee.js and editDepartment.js use-cases be?
Do I const editDepartment = require("../departments"); within my addEmployee.js and use it within addEmployee.js?
Do I inject it as a dependency and then use it?
Do I create a separate useCase increasePeopleCountInDepartmentById.js and require/inject that one? So that it's something with a specific purpose and not the "general" editing.

How do use-cases interact with each other?
A use-case is a scenario in which a system receives an external request (such as user input) and, following a list of actions, responds to it (Wikipedia). Therefore, use-cases by definition cannot interact with each other. Moreover, they have no interest in interacting with each other.
A use-case, be it addEmployee or editDepartment (depending on your system design), should orchestrate the participating domain entities (Employee and Department). Again, mixing use-cases is beside the point.
Here's how you can implement addEmployee:
// TODO: start database transaction
const newEmployee = employeeFactory.create(id, name, age, targetDepartmentId);
const department = departmentRepository.get(targetDepartmentId);
department.peopleCount = department.peopleCount + 1;
departmentRepository.save(department);
employeeRepository.add(newEmployee);
// TODO: commit transaction
Do I inject it as a dependency and then use it?
As can be inferred from my example, three objects are to be injected into the use-case: employeeFactory, departmentRepository, employeeRepository.
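A minimal sketch of how that injection might look, assuming a factory-style use-case module (the name makeAddEmployee and the exact repository methods are illustrative, not prescribed by Clean Architecture):
// Hypothetical factory: the use-case closes over its injected collaborators
function makeAddEmployee({ employeeFactory, departmentRepository, employeeRepository }) {
    return async function addEmployee({ id, name, age, targetDepartmentId }) {
        // TODO: start database transaction
        const newEmployee = employeeFactory.create(id, name, age, targetDepartmentId);
        const department = await departmentRepository.get(targetDepartmentId);
        department.peopleCount += 1;
        await departmentRepository.save(department);
        await employeeRepository.add(newEmployee);
        // TODO: commit transaction
        return newEmployee;
    };
}

// Composition root, e.g. at application start-up:
// const addEmployee = makeAddEmployee({ employeeFactory, departmentRepository, employeeRepository });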

Related

share firestore collection paths across admin and web

I'd like to make re-usable functions that get the Firestore Document/Collection reference across web and admin (node.js).
for example:
getUserDocumentReference(company: string, user: string) {
    return firebase.collection("companies")
        .doc(company)
        .collection("users")
        .doc(user);
}
This will reduce errors and coordinate changes across both environments.
Problem: Admin imports firestore from firebase-admin, and web imports from firebase.
I've tried making some class/function where I pass in my firestore reference, but it becomes a pain when I have to declare the return types:
const ref = (
    getUserDocumentReference("a", "1") as firebase.firestore.DocumentReference
).withConverter(converter)
Is there a smarter/cleaner way to do this without re-inventing the wheel (i.e. somehow passing an array or re-creating paths in a complex way)?
my current approach:
class FirestoreReferences {
    firestore: firebase.firestore.Firestore | admin.firestore.Firestore;

    constructor(firestore: firebase.firestore.Firestore | admin.firestore.Firestore) {
        this.firestore = firestore;
    }

    getUserDocumentReference(company: string, user: string): FirebaseFirestore.DocumentReference | firebase.firestore.DocumentReference {
        return this.firestore.collection(...).doc(...);
    }
}
Just found out about Typesaurus which provides generic types to share across web/admin!
The simplest answer: DO NOT share the .doc() reference itself (or a snapshot's .ref) between environments. Use the reference's .path - which is a string with the FULL PATH to the document. Save/share it as let refPath = whatever.doc().path and re-build it as .doc(refPath) in either environment.
I DO NOT actually RECOMMEND this - it exposes your internal structure - but it isn't inherently insecure (your security rules better be taking care of that).
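For example, a minimal sketch (db stands for whichever Firestore instance the current environment provides; the company/user ids are made up):
// Build the reference once, then keep only its path string
const userRef = db.collection("companies").doc("acme").collection("users").doc("u1");
const refPath = userRef.path; // "companies/acme/users/u1"

// Later, in either environment, rebuild a reference from the shared path string
const sameRef = db.doc(refPath);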
btw, I'm building an entire wrapper npm package (@leaddreamer/firebase-wrapper) for this specific purpose.
You should not do this. The Admin SDK is meant for server-side usage because it has full control over your entire project. If a user gets access to this, they have control over your app. Keep firebase and firebase-admin separate.

Breeze JS: Is there a way to query entities from data.results?

I have a Breeze web api controller, with methods that accept parameters and do some work, filtering, sorting, etc, on the server.
In querySucceeded, I'd like to do further querying on data.results. Is there a way to accomplish this? I got this working by exporting/importing data.results into a local manager and doing the projection from there. The projection is needed in order to use the observable collection in a vendor grid control.
var query = datacontext.EntityQuery.from("GetActiveCustomers")
    .withParameters({ organizationID: "1" })
    .toType("Customer")
    .expand("Organization")
    .orderBy('name');

var queryProjection = query
    .select("customerID, organizationID, name, isActive, organization.name");

return manager.executeQuery(query)
    .then(querySucceeded)
    .fail(queryFailed);

function querySucceeded(data) {
    var exportData = manager.exportEntities(data.results);
    var localManager = breeze.EntityManager.importEntities(exportData);
    var resultProjection = localManager.executeQueryLocally(queryProjection);
    // This is the way I came up with to query data.results (exporting/importing the result entities to a local manager)
    // Is there a better way to do this? Querying directly data.results. Example: data.results.where(...).select("customerID, organizationID...)
    if (collectionObservable) {
        collectionObservable(resultProjection);
    }
    log('Retrieved Data from remote data source', data, true);
}
You've taken an interesting approach. Normally a projection returns uncacheable objects, not entities. But you cast the results to Customer (with the toType clause), which means you've created PARTIAL Customer entities with missing data.
I must hope you know what you are doing and have no intention of saving changes to these customer entities while they remain partial, else calamity may ensue.
Note that when you imported the selected Customers to the "localManager" you did not bring along their related Organization entities. That means an expression such as resultProjection[0].organization will return null. That doesn't seem correct.
I understand that you want to hold on to a subset of the Customer partial entities and that there is no local query that could select that subset from cache because the selection criteria are only fully known on the server.
I think I would handle this need differently.
First, I would bury all of this logic inside the DataContext itself; the purpose of a DataContext is to encapsulate the details of data access so that callers (such as ViewModels) don't have to know internals. The DataContext is an example of the UnitOfWork (UoW) pattern, an abstraction that helps isolate the data access/manipulation concerns from ViewModel concerns.
Then I would store it either in a named array of the DataContext (DC) or of the ViewModel (VM), depending upon whether this subset was of narrow or broad interest in the application.
If only the VM instance cares about this subset, then the DC should return the data.results and let the VM hold them.
I do not understand why you are re-querying a local EntityManager for this set, nor why your local query is ALSO applying a projection ... which would return non-entity data objects to the caller. What is wrong with returning the (partial) Customer entities?
It seems you intend to further filter the subset on the client. Hey ... it's a JavaScript array. You can call stuffArray.filter(filterFunction).
Sure that doesn't give you the Breeze LINQ-like query syntax ... but do you really need that? Why do you need ".select" over that set?
If that REALLY is your need, then I guess I understand why you're dumping the results into a separate EntityManager for local use. In that case, I believe you'll need more code in your query callback method to import the related Organization entities into that local EM so that someCustomer.organization returns a value. The ever-increasing trickiness of this approach makes me uncomfortable but it is your application.
If you continue down this road, I strongly encourage you to encapsulate it either in the DC or in some kind of service class. I wouldn't want my VMs to know about any of these shenanigans.
Best of luck.
Update 3 Oct 2013: Local cache query filtering on unmapped property
After sleeping on it, I have another idea for you that eliminates your need for a second EM in this use case.
You can add an unmapped property to the client-side Customer entity and set that property with a subset marker after querying the "GetActiveCustomers" endpoint on the server; you'd set the marker in the query callback.
Then you can compose a local query that filters on the marker value to ensure you only consider Customer objects from that subset.
Reference the marker value only in local queries. I don't know if a remote query filtering on the marker value will fail or simply ignore that criterion.
You won't need a separate local EntityManager; the Customer entities in your main manager carry the evidence of the server-side filtering. Of course the server will never have to deal with your unmapped property value.
Yes, a breeze local query can target unmapped properties as well as mapped properties.
Here's a small demonstration. Register a custom constructor like this:
function Customer() { /* Custom constructor ... which you register with the metadataStore*/
// Add unmapped 'subset' property to be queried locally.
this.subset = Math.ceil(Math.random() * 3); // simulate values {1..3}
}
Later you query it locally. Here are examples of queries that do and do not reference that property:
// All customers in cache
var x = breeze.EntityQuery.from("Customers").using(manager).executeLocally();

// All customers in cache whose unmapped 'subset' property === 1.
var y = breeze.EntityQuery.from("Customers")
    .where("subset", 'eq', 1) // more criteria; knock yourself out
    .using(manager).executeLocally();
I trust you'll know how to set the subset property appropriately in your callback to our "GetActiveCustomers" query.
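For completeness, a sketch of what that might look like (this assumes the default Knockout model library, where Breeze entity properties are observable functions; with another model library you would assign the property directly):
function querySucceeded(data) {
    // Mark every Customer returned by the server-side filter as belonging to subset 1
    data.results.forEach(function (customer) {
        customer.subset(1);
    });
    // ... existing callback logic ...
}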
HTH
Once you have queried for some data, Breeze stores those entities in local memory.
All you have to do is query locally when you need to filter the data some more.
You do this by telling the manager to query locally:
manager.executeQueryLocally(query);
Because querying the database is done asynchronously, you have to make sure that you only query local memory once something is actually there. Follow the promises.
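For example, a sketch reusing the query from the question (the extra isActive criterion is only illustrative):
return manager.executeQuery(query)          // remote round trip fills the cache
    .then(function (data) {
        // Re-filter the cached Customer entities without another round trip
        return manager.executeQueryLocally(query.where('isActive', '==', true));
    })
    .fail(queryFailed);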

Using Visualsearch.js in a backbone.js framework

First of all, thanks to the guys of DocumentCloud for releasing those two super-useful tools.
Here goes my question(s):
I'm trying to use visualsearch.js in a backbone.js app.
In my app I have a basic index.html and a myapp.js javascript file which contains the main application done with backbone.js.
I use CouchDB as data storage, and I successfully can retrieve in a restful way all the data to be put in the collection.
I must retrieve the search query given by visualsearch.js and use it to filter a collection.
I surely need a view for the searchbox, to trigger an event when enter is hit, but..
Should I initialize the searchbox externally to myapp.js, within an additional js file or my index.html page (as suggested in the visualsearch mini-tutorial)?
Or should I initialize it within the searchbox view (in myapp.js)? This latter solution seems too tricky (it was what I was trying to do, but even when I succeeded, it was too complicated and I lost the simplicity of the backbone MVC).
Let's say I succeed in retrieving the search string as a JSON object like {name: 'Fat David', address: '24, slim st', phone: '0098876534287'}. Having done that, which function can I use to retrieve, from the collection, only the models whose fields match the given string? I understand that I should do a map or a filter, but those functions seem to natively serve slightly different tasks.
a. Is it really the best way to filter results? It burdens the client (which must filter the results), while making a new query (a view or a filter) to CouchDB would be quite simple and, considering the small amount of data and the low access rate to the site, not so expensive. However, doing all the filtering client-side is much simpler than creating a new view (or list or filter) in CouchDB and linking it to the backbone.js view.
You can initialize your VisualSearch.js search box right in your myapp.js. Just make sure you keep a reference to it so you can then extract out the facets and values later.
For example:
var visualSearch = VS.init({...})
// Returns the unstructured search query
visualSearch.searchBox.value()
// "country: "South Africa" account: 5-samuel title: "Pentagon Papers""
// Returns an array of Facet model instances
visualSearch.searchQuery.facets()
// [FacetModel<country:"South Africa">,
// FacetModel<account:5-samuel>,
// FacetModel<title:"Pentagon Papers">]
If you have these models in a Backbone collection, you can easily perform a filter:
var facets = visualSearch.searchQuery.models;
_.each(facets, function(facet) {
    switch (facet.get('category')) {
        case 'country':
            CountriesCollection.select(function(country) {
                return country.get('name') == facet.get('value');
            });
            break;
        // etc...
    }
});

Any recommendations for deep data structures with Backbone?

I've run into a headache with Backbone. I have a collection of specified records, which have subrecords, for example: surgeons have scheduled procedures, procedures have equipment, some equipment has consumable needs (gasses, liquids, etc). If I have a Backbone collection surgeons, then each surgeon has a model-- but his procedures and equipment and consumables will all be plain ol' Javascript arrays and objects after being unpacked from JSON.
I suppose I could, in the SurgeonsCollection, use the parse() to make new ProcedureCollections, and in turn make new EquipmentCollections, but after a while this is turning into a hairball. To make it sensible server-side there's a single point of contact that takes one surgeon and his stuff as a POST-- so propagating the 'set' on a ConsumableModel automagically to trigger a 'save' down the hierarchy also makes the whole hierarchical approach fuzzy.
Has anyone else encountered a problem like this? How did you solve it?
This can be helpful in your case: https://github.com/PaulUithol/Backbone-relational
You specify the relations (1:1, 1:n, n:n) and it will parse the JSON accordingly. It also creates a global store to keep track of all records.
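For the surgeon/procedure case from the question, the declaration might look roughly like this (the model and collection names are illustrative; check the Backbone-relational docs for the exact options):
var Surgeon = Backbone.RelationalModel.extend({
    relations: [{
        type: Backbone.HasMany,
        key: 'procedures',
        relatedModel: 'Procedure',
        collectionType: 'ProcedureCollection',
        reverseRelation: {
            key: 'surgeon'
        }
    }]
});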
So, one way I solved this problem is by doing the following:
Have all models inherit from a custom BaseModel and put the following function in BaseModel:
convertToModel: function(dataType, modelType) {
    if (this.get(dataType)) {
        var map = { };
        map[dataType] = new modelType(this.get(dataType));
        this.set(map);
    }
}
Override Backbone.sync and at first let the Model serialize as it normally would:
model.set(response, { silent: true });
Then check to see if the model has an onUpdate function:
if (model.onUpdate) {
model.onUpdate();
}
Then, whenever you have a model that you want to generate submodels and subcollections, implement onUpdate in the model with something like this:
onUpdate: function() {
this.convertToModel('nameOfAttribute1', SomeCustomModel1);
this.convertToModel('nameOfAttribute2', SomeCustomModel2);
}
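For the surgeon example from the question, the hook might look like this (Surgeon and ProcedureCollection are illustrative names):
var Surgeon = BaseModel.extend({
    onUpdate: function() {
        // Turn the plain 'procedures' array from the JSON into a real Backbone collection
        this.convertToModel('procedures', ProcedureCollection);
    }
});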
I would separate out the different surgeons, procedures, equipment, etc. as different resources in your web service. If you only need to update the equipment for a particular procedure, you can update that one procedure.
Also, if you didn't always need all the information, I would also lazy-load data as needed, but send down fully-populated objects where needed to increase performance.

Can an object's methods act on itself?

I'm not sure where to put some methods.
Let's say I want to send an email.
Which of the following options should I choose:
email = new Email("title", "address", "body");
email.send();
or
email = new Email("title", "address", "body");
Postman.send(email);
Because how can an email send itself? And isn't it better to have a central object that handles all emails, because then it can regulate things like sending all emails at a specific time, sorting mails, removing mails, etc.?
Also if I want to delete an user, how should I do:
user.delete();
or
administrator.delete(user);
Please share your thoughts about how to know where to put the methods.
I disagree with Arseny. An email can send itself, and that's exactly where the code should live. That's what methods are: actions that can be performed on the object.
However, note that your two approaches are not mutually exclusive. An email's send action could easily just contain the code to add itself to the Postman's send queue, and if you do want to regulate the actions, that might be a good idea. But that's no reason not to have a send method on the email class.
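In JavaScript that idea might look like this (Postman and its enqueue method are illustrative names, not an existing API):
class Email {
    constructor(title, address, body) {
        this.title = title;
        this.address = address;
        this.body = body;
    }

    // The email still "sends itself", but delegates the actual delivery to a central Postman
    send(postman) {
        postman.enqueue(this);
    }
}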
All sensible methods that act on emails should be in the email class, for the convenience of users of your class. But email objects should not contain any fields except those related to the content of the email itself (single responsibility principle).
Therefore, I'd suggest this:
class Email
  def email(postman)
    postman.send(self)
  end
end
In statically typed languages, the type of the postman argument should definitely be an interface.
Use the second method to have a class manager handle the objects (emails or users). This follows the single-responsibility-principle.
In Ruby I'd do this:
email = Email.deliver(recipient, subject, message)
The corresponding class would look something like this:
class Email
  def self.deliver(recipient, subject, message)
    # Do stuff to send mail
  end
end
This is clean and easy to use.
On the delete issue: Delete the object you want to delete. So #user.delete would be best. If you want to register the administrator who deleted the user: #user.delete_by(#admin)
I agree with Daniel.
Following your first example, a lot of common widgets could also have a "collections" manager like you mention, but they don't necessarily need one. A Tabs widget can show/hide one of its own tabs without necessarily specifying a new Tab class for each individual one.
I believe functionality should be encapsulated. The example of deleting a user, however, is a slightly different case. Having a delete method on the User class could do things like clear its own internal variables, settings, etc., but it won't delete the reference to itself. I find that delete methods are better suited to collection-based classes. I wouldn't per se put the delete method on an admin class, but rather on a Users "collection" class.
function Users() {
    var users = [];

    this.add = function(user) {
        // add user code
        users.push(new User(user));
    };

    this.remove = function(user) {
        // remove user code and remove it from array
    };
}
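A usage sketch (the user data is made up):
var users = new Users();
users.add({ name: 'Ada Lovelace' }); // the collection wraps the raw data in a User model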
I don't quite see how an object can fully add/remove itself so it makes sense to me to have that functionality at the collections level. Besides that though, I would say it should be encapsulated within the class it's meant for.
