I'm currently in the process of architecting a Node.js microservice-based application, and I'd like to make use of Domain Driven Design as a guideline for structuring the individual services. I have a few questions on this as follows:
As far as I understand, the domain layer should typically contain repository interfaces, and specific implementations of these interfaces should then be created in your infrastructure layer. This abstracts the underlying storage/technical concerns out of your domain layer. In the context of my project, seeing as JavaScript does not natively support things like interfaces, how would one go about achieving a similar effect?
One service in particular will be handling authentication by means of OAuth. Would one typically classify OAuth-related logic as an application service? Or would it fall under the infrastructure layer? My understanding is that it's not infrastructure-related, nor is it related to the core domain, but is still required as part of serving the application to clients. Hence, I've placed it within the application layer for now.
Following on from point 2, where would OAuth-related entities/repositories (i.e. tokens/clients) best be placed? I'm tempted to keep them in the domain layer along with my other entities/repositories, even though they aren't technically a business concern.
Some notes to add to the above:
I'm not keen on using TypeScript (as suggested here).
I am fully aware that some may consider JavaScript as being non-suitable for DDD-type approaches. I'm familiar with some of the pitfalls, and once again, I'm using DDD as a guideline.
On a side note, I have come up with the following project structure, which was heavily inspired by this great example of a DDD-based project in Laravel:
app/
----app.js
----package.json
----lib/
--------app/
------------service/
----------------oauth.js
----------------todo.js
--------domain/
------------list/
----------------model.js
----------------repository.js
----------------service.js
------------task/
----------------model.js
----------------repository.js
----------------service.js
--------http/
------------controller/
----------------list.js
----------------task.js
------------middleware/
----------------auth.js
----------------error.js
--------infrastructure/
------------db.js
------------logger.js
------------email.js
I would love to hear any thoughts you may have on this layout. I'm fully aware that the topic of project structure is somewhat opinion-based, but I'm always keen to hear what others have to say.
Have you considered wolkenkit?
It's a CQRS and event-sourcing framework for Node.js and JavaScript that works pretty well with domain-driven design (DDD). It might give you a good idea of how to structure your code, and also a runtime to execute things without having to reinvent the wheel.
I know the guys behind it, and they have invested 3-4 years of thought, blood, and sweat into it.
Domain-Driven Design guides decomposition of a system into a set of bounded contexts/services/microservices. However, the way you design each service is individual and depends on the service's business domain. For example, your business's core domain services and supporting domain services should be architected in different ways.
Even though this question is quite old, I think it is worth adding a clarification regarding your first question:
As far as I understand, the domain layer should typically contain repository interfaces, and specific implementations of these interfaces should then be created in your infrastructure layer. This abstracts the underlying storage/technical concerns out of your domain layer. In the context of my project, seeing as JavaScript does not natively support things like interfaces, how would one go about achieving a similar effect?
JavaScript does not provide interfaces. Instead of mimicking OOP concepts, why not take a look at more functional ones? Higher-order functions are very well suited to JavaScript; you can use them to "declare" dependencies and inject them at runtime:
const { FooData } = require('./data');

// The default implementation throws: it acts as the "interface", declaring
// the dependency the domain expects to have injected.
const getFooOfId = (
  getFooOfIdImpl = async (fooId) => {
    throw new Error(`Can't retrieve foo of id ${fooId}: missing implementation`);
  },
) => async (fooId) => {
  try {
    const fooData = await getFooOfIdImpl(fooId);
    return FooData(fooData);
  } catch (err) {
    throw new Error(`Unable to retrieve Foo with id ${fooId}`);
  }
};

/*
This function is used to build a concrete implementation, for example with an in-memory database:

const inMemoryFooDatabase = {
  foo17: {
    id: 'foo17',
    foo: 'a foo value',
    bar: 'a bar value',
    foobaz: 1234,
  },
};

const getFooOfIdFromInMemoryDatabase = getFooOfId(fooId => inMemoryFooDatabase[fooId]);
*/

module.exports = {
  getFooOfId,
};
Nothing prevents you from bypassing this function entirely, since there is no strong type checking in JavaScript, but it acts as a declaration of your domain's "interface" needs.
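For example, the application layer could act as a composition root and inject a concrete implementation when wiring the service together (the module paths and the Mongo-backed function below are hypothetical, just to sketch the idea):

// Hypothetical wiring in the application layer (composition root)
const { getFooOfId } = require('./domain/foo/getFooOfId');
// assumed infrastructure module exporting an async (fooId) => rawFooData function
const { getFooOfIdFromMongo } = require('./infrastructure/fooMongoRepository');

const findFoo = getFooOfId(getFooOfIdFromMongo);

findFoo('foo17')
  .then((foo) => console.log(foo))
  .catch((err) => console.error(err.message));

The domain module never requires the infrastructure module directly; only the wiring code knows about both.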
If you want to learn more about this, you can read my post on the subject: Domain-Driven Design for JavaScript developers
Related
To preface, I'm pretty comfortable with JavaScript and have done well with Angular up until I saw it implemented in a project with TypeScript. Between still climbing the learning curve of Angular and seeing this new way of implementing it with TS, I was left feeling pretty overwhelmed.
I paused the low-level learning of AngularJS and started going over tutorials that used the tech we're currently using: Angular, TypeScript, and Entity Framework with Web API.
I came across this example: https://blogs.msdn.microsoft.com/tess/2015/12/10/3b-services-getting-started-with-angularjs-typescript-and-asp-net-web-api/
For my actual question, I was doing fine up until the author started implementing his service. I was thinking the way I would go about it (using what I had studied and learned previously) and the author hit me with using an interface to describe the service itself:
Create the DataAccessService
Create a new DataAccessService.ts file under app/common/services
Add an Interface describing the DataAccessService
module app.common.services {
    interface IDataAccessService {
        getStoreResource(): ng.resource.IResourceClass<IStoreResource>;
    }
}
OK, so from my perspective he's created a function that will be called to get the resource he wants, but what's the benefit of this implementation, specifically with the interface (IDataAccessService)? Then he goes deeper down the rabbit hole and creates another interface (IStoreResource) to return a resource of another interface (IStores):
The DataAccessService will just give us access to the various APIs we may be using, in this case a resource for the stores API. We could also have methods here for getAdResource() or getProductResource(), for example, if we had APIs to get ads or products or similar. We tell it to return an ng.resource.IResourceClass so we'll have to define what that is.
Declare the IStoreResource as a resource of IStores… this is to explain what types of items will be returned from or passed to the resource. This should go above the interface declaration for IDataAccessService:
interface IStoreResource extends ng.resource.IResource<app.domain.IStore> { }
Pretty lost at this point but it continues:
Next we're going to implement the DataAccessService below the IDataAccessService interface:
export class DataAccessService implements IDataAccessService {
    // minification protection
    static $inject = ["$resource"];

    constructor(private $resource: ng.resource.IResourceService) { }

    getStoreResource(): ng.resource.IResourceClass<IStoreResource> {
        return this.$resource("/api/stores/:id");
    }
}
I follow this a little better, as it's the piece that looks to be doing the actual work. We've got our function that will get our resource API to return the data we need. But it seems to me (and this may be where my OOP knowledge is lacking) that these handoffs and implementations are just added complexity, and I can't see the benefit.
Can someone explain this with a ELI5 or more layman's approach? Thanks!
This is really where TypeScript shines and brings a more classical approach to object-oriented programming with JavaScript. But first, you need to know your SOLID principles, specifically in this case, the Dependency Inversion Principle (as described in toskv's comment), which states:
High-level modules should not depend on low-level modules. Both should depend on abstractions.
Abstractions should not depend on details. Details should depend on abstractions.
To put this into context, interfaces are used to describe a contract of functionality that an object requires in order to perform a given task.
Concrete objects then implement the interface, providing the behaviour. The dependency should be exposed via the interface, not the concrete object, since this allows loose coupling and has several advantages, such as testability.
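To see the principle without the TypeScript syntax, here is a rough plain-JavaScript sketch (the names are made up for illustration): the service only assumes "something with a getStore(id) method", so a test can hand it a fake instead of a real HTTP-backed resource.

// High-level module: depends on an abstraction ("has a getStore(id) method"),
// not on a concrete data-access object.
function StoreService(dataAccess) {
  this.dataAccess = dataAccess;
}
StoreService.prototype.loadStore = function (id) {
  return this.dataAccess.getStore(id);
};

// One low-level detail, injected at wiring time:
var httpDataAccess = {
  getStore: function (id) {
    return fetch('/api/stores/' + id).then(function (res) { return res.json(); });
  },
};

// Another detail, used in tests:
var fakeDataAccess = {
  getStore: function (id) { return Promise.resolve({ id: id, name: 'Test store' }); },
};

var storeService = new StoreService(httpDataAccess); // or fakeDataAccess in a test

TypeScript's interface just lets the compiler check that contract for you instead of leaving it implicit.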
I'm trying to refactor my old Colorchess (ChessHighlight) program. It's a chess board aimed at highlighting the influence of the chessmen on each turn, to help beginners understand the game.
According to the pressure balance on the board at a given turn, tiles are colorized as follows:
green = no pressure
white = white player owns the tile
black = black player owns the tile
a color picked from a green-yellow-orange-red gradient = conflicted situation for the tile
An AI is planned, but for the moment I'm focusing on making this game playable correctly across devices, in both situations: table gaming on a tablet, or network gaming.
I decided to code the client side in JavaScript (I love it!), and server syncing will be in PHP, since that's what my current hosting environment runs.
My questions and thoughts come when I try to put it all together:
(client-side libraries)
- RequireJS --> loading files
- KnockoutJS --> binding UI events
- ICanHaz --> templating
- Zepto --> DOM manipulating
- and maybe underscoreJS for utilities
I'm worried about producing spaghetti code that is difficult to understand and maintain.
In the old program, ChessHighlight, there were lots of interlaced constructor declarations and prototype extensions, for example:
// file board.js
function Board() { /* ... */ }
function Tile() { /* ... */ }

// next included file:
function Chessman() { /* ... */ }

// again in a third included file,
// since Board and the chessmen are defined by now:
Tile.prototype.put = function (chessman) { /* ... */ };
Tile.prototype.empty = function () { /* ... */ };
Due to the highly coupled nature of the game, each file included in the stack carries more and more definitions, and the code became messy...
A difficulty is that the game needs a transactional implementation, since I wrote setters like:
// both... (think ACID commit in a RDBMS)
tile_h8.chessman = rook_white_1;
rook_white_1.tile = tile_h8;
Now I solve this issue (partially) by creating an "Object Relational Pool Manager", which is intended to store:
references to objects of any kind (Board, Chessman, Tile, Influence...)
the relations between objects
and, ideally, some type checks and cardinality/arity summing
(I'm baking the code at this time)
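To illustrate the invariant I'm after, the smallest version of that idea is a single helper that always updates both sides of the relation together, so no caller can leave them half-set (just a sketch, not the actual pool manager):

// Sketch: keep the Tile <-> Chessman relation consistent in one place.
function relate(tile, chessman) {
  // detach existing links first, so we never keep dangling references
  if (chessman.tile) { chessman.tile.chessman = null; }
  if (tile.chessman) { tile.chessman.tile = null; }
  tile.chessman = chessman;
  chessman.tile = tile;
}

// relate(tile_h8, rook_white_1); // both sides now agree, in one "commit"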
SOME QUESTIONS :
How do you write extensible code (elegantly, without class and interface simulation, rather with prototypes and closures) in such a way that you define the basic atoms of code (Tile, Board, Chessman) in very short files, and then glue them together in another part, adding behavior with callbacks?
NOTE: I have tried to read game engine code (Crafty, Quintus...), but the cores of these engines (1600 lines of code), although well documented, are difficult to understand (where is the starting point? where are the runtime flows?).
UML: I have the feeling that classical methodologies could rapidly fail with closures, callbacks and nested scopes. They seem instinctive to write and understand, but drawing diagrams seems tricky... What do good JS developers use as a safety rope to climb 1500+ line-of-code peaks?
And the last one: I would like a jQuery-like engine API, so I can plug it easily into the computed observables of the Knockout ViewModels of the GUI. Something like this:
var colorchess = new Colorchess( my_VM_for_this_DIV_part );
colorchess.reset( "standard-game" );
colorchess("a1")              // --> a wrapper for the "a1" tile
colorchess("h8").chessman()   // --> a wrapper for the "h8" tile's chessman (rook)
// iterate over the black chessmen
colorchess("black").each( function( ref, chessman ) {} )
// a chainable example
colorchess("white").chessman("queen").influences()
... but for the moment, I don't know exactly how to model, write, and test those kinds of mutable objects.
Suggestions are welcome. Thanks for your help.
I don't think that defining your objects in constructor functions is bad. Defining them as closures is worse because it'll consume more resources and isn't as easily optimized by JS engines.
Tightly coupled objects are a problem you'll have with closures as well; you can use the mediator pattern for that. A mediator can complicate your code, but you can easily control the application flow once you set it up.
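A minimal mediator can be nothing more than an object that modules register with and talk through; a rough sketch (the channel names and handlers are illustrative only):

// Modules only know the mediator, never each other.
var mediator = {
  channels: {},
  subscribe: function (channel, fn) {
    (this.channels[channel] = this.channels[channel] || []).push(fn);
  },
  publish: function (channel) {
    var args = Array.prototype.slice.call(arguments, 1);
    (this.channels[channel] || []).forEach(function (fn) { fn.apply(null, args); });
  }
};

// The board reacts to moves without knowing who produced them:
mediator.subscribe('piece:moved', function (move) { /* recolor tiles here */ });
// A chessman (or the UI) just announces the move:
mediator.publish('piece:moved', { from: 'h1', to: 'h8' });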
Having good tools is important in big JS projects. I've tried a few: Google Closure Compiler with Eclipse, Visual Studio with TypeScript, and now Dart (Google Dart) with the Dart Editor (Eclipse with the right plugins). These can help you spot inconsistencies quickly and refactor more easily. TypeScript would be the only one where it is easy to use existing JS libraries, because TypeScript is JS with optional extensions. I'm not sure how the AngularJS Dart port is coming along, but that is something worth looking at.
As for ACID: if you're talking about updating the state of your client-side objects so that they reflect the data in the server's database, you can use promises instead of callbacks; that will improve the readability of your XHR updates. Write your PHP so that an update message carries the desired state of both the piece and the tile, so the server can change the data in a single transaction.
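For example, the client could send one message that carries the whole move and only apply it locally once the server confirms; a sketch using a promise-based request (the endpoint name and payload shape below are made up):

// Hypothetical move update: one request carries both the piece and the tile state,
// so the PHP side can persist it in a single transaction.
function sendMove(move) {
  return fetch('/api/move.php', {   // assumed endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(move)      // e.g. { piece: 'rook_white_1', from: 'h1', to: 'h8' }
  }).then(function (res) {
    if (!res.ok) { throw new Error('Move rejected by server'); }
    return res.json();
  });
}

sendMove({ piece: 'rook_white_1', from: 'h1', to: 'h8' })
  .then(function (confirmedState) { /* apply the confirmed state to the board */ })
  .catch(function (err) { /* roll back locally / show an error */ });

Any promise-returning request helper would work the same way; the point is that the board state only changes after the single server-side transaction succeeds.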
I'm very interested in using the new EntityErrorsException that came with today's release. But the way my colleague implemented the server-side logic might be an issue.
In the Web API controller, we use our own context provider, which inherits from Breeze's EFContextProvider. See the code below:
public class SepaContextProvider : EFContextProvider<CompanyName.BLL.BLDirectDebitContext>
{
    public SepaContextProvider() : base() { }
}
As you can see, the generic parameter is a BLDirectDebitContext, which inherits from a DirectDebitContext class defined in the data-access layer:
public class DirectDebitContext : DbContext
{
}
This way, the entities are validated in the BLDirectDebitContext class by overriding ValidateEntity(), so that if this code is called from a desktop application (which doesn't use Web API or even Breeze), the validation logic does not have to be rewritten.
Ideally, we could create EFEntityError objects here and throw an EntityErrorsException. But that would mean referencing the Breeze.WebApi DLL in our business layer, which does not sound so good considering the number of dependencies.
Would it not make more sense to split the Breeze.WebApi DLL into different DLLs? Or is it our approach that does not make sense?
We are planning to refactor Breeze.WebApi into at least two or three DLLs in the near future. (Sorry, no exact date yet.) One will include the core .NET generic code (with substantially fewer dependencies), and the other will be Entity Framework specific. We plan to release an NHibernate-specific DLL at the same time, in parallel to the Entity Framework version.
This will, of course, be a breaking change which is why we are trying to get everything organized properly so that we don't have to do this again. Ideally, the conversion from the current structure to the new one will be fairly easy for any Breeze consumers.
On a slightly related issue, did you notice that you can also use standard .NET DataAnnotation validation attributes as well as the EntityErrorsException? The two mechanisms result in exactly the same client-side experience.
I'm currently using a mediator that sits in-between all my modules and allows them to communicate between one another. All modules must go through the mediator to send out messages to anything that's listening. I've been doing some reading on RequireJS but I've not found any documentation how best you facilitate communication between modules.
I've looked at signals but if I understand correctly signals aren't really that useful if you're running things through a mediator. I'm just left wondering what else I could try. I'm quite keen on using a callback pattern of some kind but haven't got past anything more sophisticated than a simple lookup table in the mediator.
Here's the signal implementation I found: https://github.com/millermedeiros/js-signals
Here's something else I found: http://ryanflorence.com/publisher.js/
Is there a standardized approach to this problem or must everything be dependency-driven?
Using a centralized event manager is a fairly common and pretty scalable approach. It's hard to tell from your question what problem, if any, you're having with an events model. The typical thing is as follows (using publisher):
File 1:
require(['publisher', 'module1'], function (Publisher, Module1) {
    var module = new Module1();
    Publisher.subscribe('globaleventname', module.handleGlobalEvent, module);
});
File 2:
require(['publisher', 'module2'], function (Publisher, Module2) {
    var module = new Module2();
    module.someMethod = function () {
        // method code
        // when this method needs module1 to run its handler:
        Publisher.publish('globaleventname', 'arguments', 'to', 'eventhandlers');
    };
});
The main advantage here is loose coupling; rather than objects knowing methods of other objects, objects can fire events and other objects know how to handle that particular application state. If an object doesn't exist that handles the event, no error is thrown.
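If it helps to see how small the central piece is, a stripped-down publisher can be written as an AMD module in a few lines (this is a sketch, not publisher.js's actual implementation):

// Minimal centralized event manager as a named AMD module.
define('publisher', [], function () {
  var topics = {};
  return {
    subscribe: function (name, handler, context) {
      (topics[name] = topics[name] || []).push({ fn: handler, ctx: context });
    },
    publish: function (name) {
      var args = Array.prototype.slice.call(arguments, 1);
      (topics[name] || []).forEach(function (sub) { sub.fn.apply(sub.ctx, args); });
    }
  };
});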
What problems are you having with this approach?
Here's something you might want to try out:
https://github.com/naugtur/overlord.js
It can do a bit more than an ordinary publisher or mediator. It allows creating a common API for accessing any methods of any modules.
This is kind of a shameless plug, because it's my own tool, but it seems quite relevant to the question.
Support for require.js has been added.
Can anyone suggest a pattern that can be used for writing a JavaScript API wrapper where there is no shared code between multiple implementations? The idea is to provide the client consumer with a single wrapping API for one of many possible APIs determined at runtime. API calls could be to objects/libraries already in the app environment, or web service calls.
The following bits of pseudo-code are two approaches I've considered:
Monolithic Solution
var apiWrapper = {
    init: function () {
        // *runtime* context of which API to call
        this.context = App.getContext();
    },
    getName: function () {
        switch (this.context) {
            case a:
                return a.getDeviceName(); // real api call
            case b:
                return b.deviceName;      // real api call
            // etc...
        }
    }
    // More methods ...
};
Pros: Can maintain a consistent API for the library consumers.
Cons: Will result in a huge monolithic library, difficult to maintain.
Module Solution
init.js
// set apiWrapper to the correct implementation determined at runtime
require([App.getContext()], function (api) {
    var apiWrapper = api;
});
module_a.js
// Implementation for API A
define(function () {
    var module = {
        getName: function () {
            return deviceA.getDeviceName();
        }
    };
    return module;
});
module_b.js
// Implementation for API B
define(function () {
    var module = {
        getName: function () {
            // could also potentially be a web service call
            return deviceB.getName;
        }
    };
    return module;
});
Pros: More maintainable.
Cons: Developers need to take care that API remains consistent. Not particularly DRY.
This would be a case where something along the lines of an interface would be useful, but as far as I'm aware there's no way to enforce a contract in JS.
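The closest thing I've come up with is a small runtime check that each implementation module exposes the expected methods, something like this (the method list is just an example):

// Rough sketch: fail fast at load time if a module misses part of the "contract".
var API_CONTRACT = ['getName' /*, 'getVersion', ... */];

function assertImplements(api, contract) {
  contract.forEach(function (method) {
    if (typeof api[method] !== 'function') {
      throw new Error('API wrapper is missing required method: ' + method);
    }
  });
  return api;
}

// init.js
require([App.getContext()], function (api) {
  var apiWrapper = assertImplements(api, API_CONTRACT);
});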
Is there a best practice approach for this kind of problem?
What a coincidence, someone out there is doing what I am also doing! Recently I have been delving into JS application patterns, and I am exploring the modular pattern.
I started out with this article, which has a lot of links that refer to other JS developers.
It would be better to go modular:
- mainly to avoid dependencies between two parts of a website
- though one part could depend on another, they remain "loosely coupled"
- in order to build a site, it should keep working without breaking when you tear parts out
- you also need to be able to test parts individually, without using everything else
- you can easily swap out underlying libraries (jQuery, Dojo, MooTools, etc.) without affecting the existing modules (since you are building your own API)
- in case you need to change/upgrade your API (or when you change the underlying library), you only touch the API "backing", not the API itself, and you don't re-code the parts that are using it
Here are links I have been through (vids mostly) that convey what the modular approach is all about and why to use it:
- by Nicholas Zakas - how to organize the API, libraries and modules
- by Addy Osmani - how and why modules
- by Michael Mahemoff - automatic module event (shout/listen) registration
The second approach is better because it is modular, and you or a third person can easily extend it to incorporate other services. The point about the "API remaining consistent" is not so valid, because with proper unit tests you keep things consistent.
The second approach is also future-proof, because you don't know what unimaginable things you may have to do to implement, say, getName for service C; in that case it is better to have a separate module_c.js with all the complications, instead of spaghetti code in a single monolithic module.
The need for a real interface is IMO not so important; a documented interface with unit tests is enough.
I'd go with the modular solution. Though there's no built-in way to enforce contracts, you can still decide on one, then go TDD and build a test suite that tests the modules' compliance with the interface.
Your test suite then basically takes the role that compiling would in a language with explicit interfaces: If an interface is incorrectly implemented, it'll complain.
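For example, one shared spec can be run against every implementation module; sketched below with Mocha-style describe/it (adapt to whatever test runner you actually use):

// Shared "interface compliance" spec, reused for every implementation.
function itBehavesLikeApiWrapper(name, loadModule) {
  describe(name + ' implements the wrapper API', function () {
    var api;
    before(function (done) {
      loadModule(function (loaded) { api = loaded; done(); });
    });
    it('exposes getName()', function () {
      if (typeof api.getName !== 'function') {
        throw new Error('getName is not implemented');
      }
    });
    // one test per method of the agreed contract...
  });
}

// module_a.spec.js
itBehavesLikeApiWrapper('module_a', function (cb) { require(['module_a'], cb); });
// module_b.spec.js
itBehavesLikeApiWrapper('module_b', function (cb) { require(['module_b'], cb); });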