Can anyone suggest a pattern that can be used for writing a JavaScript API wrapper, where there is no shared code between multiple implementations? The idea is to provide the client consumer with a single wrapping API for one of many possible APIs determined at runtime. APIs calls could be to objects/libraries already in the app environment, or web service calls.
The following bits of pseudo-code are two approaches I've considered:
Monolithic Solution
var apiWrapper = {
    init: function() {
        // *runtime* context of which API to call
        this.context = App.getContext();
    },
    getName: function() {
        switch (this.context) {
            case a:
                return a.getDeviceName(); // real API call
            case b:
                return b.deviceName; // real API call
            // etc...
        }
    }
    // More methods ...
};
Pros: Can maintain a consistent API for the library consumers.
Cons: Will result in a huge monolithic library, difficult to maintain.
Module Solution
init.js
// set apiWrapper to the correct implementation determined at runtime
require([App.getContext()], function(api) {
    var apiWrapper = api;
});
module_a.js
// Implementation for API A
define(function() {
    var module = {
        getName: function() {
            return deviceA.getDeviceName();
        }
    };
    return module;
});
module_b.js
// Implementation for API B
define(function() {
    var module = {
        getName: function() {
            // could also potentially be a web service call
            return deviceB.deviceName;
        }
    };
    return module;
});
Pros: More maintainable.
Cons: Developers need to take care that the API remains consistent across implementations. Not particularly DRY.
This would be a case where something along the lines of an interface would be useful, but as far as I'm aware there's no way to enforce a contract in JS.
Is there a best practice approach for this kind of problem?
What a coincidence, someone out there is doing what I am also doing! I have recently been delving into JS application patterns and am exploring the modular pattern.
I started out with this article, which has a lot of links that refer to other JS developers.
It would be better to go modular:
mainly to avoid dependencies between two parts of a website
though one part could depend on the other, they stay "loosely coupled"
a site should keep working without breaking when you tear a part out
you also need to be able to test parts individually, without pulling in everything else
you can easily swap out underlying libraries (jQuery, Dojo, MooTools, etc.) without affecting the existing modules, since you are building your own API (see the sketch after the links below)
when you need to change/upgrade your API (or swap the underlying library), you only touch the API "backing"; you don't change the API itself or re-code the parts that use it
Here are the links I have been through (mostly videos) that convey what the modular approach is all about and why to use it:
by Nicholas Zakas - how to organize the API, libraries and modules
by Addy Osmani - how and why modules
by Michael Mahemoff - automatic module event (shout/listen) registration
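To make the library-swapping point concrete, here is a minimal sketch of hiding a library behind your own API (assuming AMD and jQuery; the module and method names are illustrative):
// dom.js - your own API; callers never touch jQuery directly
define(['jquery'], function($) {
    return {
        // swap this body for dojo/mootools later without touching callers
        find: function(selector) {
            return $(selector).toArray();
        }
    };
});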
The second approach is better because it is modular, and you or a third party can easily extend it to incorporate other services. The point about the "API remaining consistent" is not so valid, because with proper unit tests you keep things consistent.
The second approach is also future-proof, because you don't know what unimaginable things you may have to do to implement, say, getName for service C. In that case it is better to have a separate module_c.js containing all the complications, instead of spaghetti code in a single monolithic module.
The need for a real interface is IMO not so important; a documented interface with unit tests is enough.
I'd go with the modular solution. Though there's no built-in way to enforce contracts, you can still decide on one, then go TDD and build a test suite that tests the modules' interface compliance.
Your test suite then basically takes on the role that compiling would in a language with explicit interfaces: if an interface is incorrectly implemented, it'll complain.
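As a minimal sketch of such a suite (assuming a Jasmine-style runner and the AMD modules above; the spec file name is illustrative), you can run the same compliance spec against every implementation:
// api-compliance.spec.js
['module_a', 'module_b'].forEach(function(name) {
    describe(name + ' interface compliance', function() {
        var api;
        beforeEach(function(done) {
            // load the implementation under test via AMD
            require([name], function(impl) {
                api = impl;
                done();
            });
        });
        it('implements getName()', function() {
            expect(typeof api.getName).toBe('function');
        });
        // ...one spec per method in the agreed contract
    });
});
Adding each new implementation module to that list is then the whole cost of keeping the contract enforced.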
Some time ago I learnt a lot of design patterns, especially creational ones.
After some time, I found myself using a pattern that I thought existed, but when I tried to search for it I didn't find any reference. Basically it is a small modification of a factory which takes advantage of closures to resolve the dependencies of a class. Actually, it is a factory of classes. Let me illustrate this with a bit of code:
function Factory(depA, depB) {
    function MyClass(something) {
        this.something = something;
    }
    MyClass.prototype.methodA = function() { /* use depA */ };
    MyClass.prototype.methodB = function() { /* use depB */ };
    return MyClass;
}
Let's assume that depA is a database and depB is a logger. Then you can use the factory like this:
var BindedClass = Factory(database, logger);
var instance = new BindedClass(something);
I find this pattern very useful for injecting common dependencies without losing control of how the class is actually instantiated.
However, it has some problems. For example, one of the things that made me realize it is not so common is that JSDoc has no support at all for this kind of pattern.
Is it a bad pattern? I can't find any disadvantage to it, apart from less modularization and non-working documentation tools.
Yes, it's a totally accepted pattern. However, we rarely call it a "factory"; more common is just "module", in this case with dependency injection. Many module systems, such as AMD, use this pattern.
You should, however, ensure that the factory produces a singleton, i.e. Factory should not be called multiple times, creating multiple similar classes.
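A minimal sketch of one way to ensure that, memoizing the factory result (the cache and helper names are illustrative, not part of any library):
// Memoize the factory so repeated calls hand back the same class
// object instead of creating multiple similar classes.
var cachedClass = null;

function getBindedClass(database, logger) {
    if (cachedClass === null) {
        cachedClass = Factory(database, logger);
    }
    return cachedClass;
}

var A = getBindedClass(database, logger);
var B = getBindedClass(database, logger);
console.log(A === B); // true: one class, not two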
I'm currently in the process of architecting a Node.js microservice-based application, and I'd like to make use of Domain Driven Design as a guideline for structuring the individual services. I have a few questions on this as follows:
As far as I understand, the domain layer should typically contain repository interfaces, and specific implementations of these interfaces should then be created in your infrastructure layer. This abstracts the underlying storage/technical concerns out of your domain layer. In the context of my project, seeing as JavaScript does not natively support things like interfaces, how would one go about achieving a similar effect?
One service in particular will be handling authentication by means of OAuth. Would one typically classify OAuth-related logic as an application service? Or would it fall under the infrastructure layer? My understanding is that it's not infrastructure-related, nor is it related to the core domain, but is still required as part of serving the application to clients. Hence, I've placed it within the application layer for now.
Following on from point 2, where would OAuth-related entities/repositories (i.e. tokens/clients) best be placed? I'm tempted to keep them in the domain layer along with my other entities/repositories, even though they aren't technically a business concern.
Some notes to add to the above:
I'm not keen on using TypeScript (as suggested here).
I am fully aware that some may consider JavaScript as being non-suitable for DDD-type approaches. I'm familiar with some of the pitfalls, and once again, I'm using DDD as a guideline.
On a side note, I have come up with the following project structure, which was heavily inspired by this great example of a DDD-based project in Laravel:
app/
----app.js
----package.json
----lib/
--------app/
------------service/
----------------oauth.js
----------------todo.js
--------domain/
------------list/
----------------model.js
----------------repository.js
----------------service.js
------------task/
----------------model.js
----------------repository.js
----------------service.js
--------http/
------------controller/
----------------list.js
----------------task.js
------------middleware/
----------------auth.js
----------------error.js
--------infrastructure/
------------db.js
------------logger.js
------------email.js
I would love to hear any thoughts you may have on this layout. I'm fully aware that the topic of project structure is somewhat opinion-based, but I'm always keen to hear what others have to say.
Have you considered wolkenkit?
It's a CQRS and event-sourcing framework for Node.js and JavaScript, that works pretty well with domain-driven design (DDD). It might give you a good idea of how to structure your code, and also a runtime to execute things without having to re-invent the wheel.
I know the guys behind it, and they have invested 3-4 years of thought, blood, and sweat into it.
Domain-Driven Design guides decomposition of a system into a set of bounded contexts/services/microservices. However, the way you design each service is individual and depends on the service's business domain. For example, your business's core domain services and supporting domain services should be architected in different ways.
Even though this question is quite old, I think it is useful to add a clarification regarding your first point:
As far as I understand, the domain layer should typically contain repository interfaces, and specific implementations of these interfaces should then be created in your infrastructure layer. This abstracts the underlying storage/technical concerns out of your domain layer. In the context of my project, seeing as JavaScript does not natively support things like interfaces, how would one go about achieving a similar effect?
JavaScript does not provide interfaces. Instead of mimicking OOP concepts, why not have a look at more functional ones? Higher-order functions are very well suited to JavaScript; you can use them to "declare" dependencies and inject them at runtime:
const { FooData } = require('./data');

const getFooOfId = (
    getFooOfIdImpl = async (fooId) => {
        throw new Error(`Can't retrieve foo of id ${fooId}: missing implementation`);
    }
) => async (fooId) => {
    try {
        const fooData = await getFooOfIdImpl(fooId);
        return FooData(fooData);
    } catch (err) {
        throw new Error(`Unable to retrieve Foo with id ${fooId}`);
    }
};

/*
This function is used to build a concrete implementation, for example with an in-memory database:

const inMemoryFooDatabase = {
    foo17: {
        id: 'foo17',
        foo: 'a foo value',
        bar: 'a bar value',
        foobaz: 1234,
    },
};

const getFooOfIdFromInMemoryDatabase = getFooOfId(async fooId => inMemoryFooDatabase[fooId]);
*/

module.exports = {
    getFooOfId,
};
Nothing prevents you from bypassing this function entirely, because there is no strong type checking in JavaScript, but it acts as a declaration of your domain's "interface" needs.
If you want to learn more about this, you can read my post on the subject: Domain-Driven Design for JavaScript developers.
Coming from a C# background, I used interfaces to base my mock objects off of. I created custom mock objects myself and built the mock implementation off a C# interface.
How do you do something like this in JS or Node? Create an interface that you can "mock" off of, where the same interface would also serve the real class that implements it? Does this even make sense in JS or the Node world?
For example, in Java it's the same deal: define an interface with method stubs and use that as the basis to create a real class or a mock class.
Unfortunately, you're not going to find a standard interface construct as part of JavaScript. I've never used C#, but I have used Java, and correct me if I'm wrong, but it looks like you're talking about creating interfaces and overriding methods both for mock-testing purposes and so that other classes can implement those interfaces.
Because this isn't a standard JavaScript feature, I think you'll find that there are going to be a lot of very broad answers here. However, to get an idea of how some popular libraries implement this, I might suggest looking at how AngularJS approaches mock testing (there are many resources online; just Google it. As a starting point, look at how they use the ngMock module with Karma and Jasmine).
Also, because of JavaScript's very flexible nature, you'll find that you can override any sort of "class method" (that is, any function object that is a member of another object, whether that's a new'ed "class" or a plain object) by simply re-implementing it wherever you need to... there's no special syntax for it. To understand where and how you would accomplish this, I'd suggest looking from the ground up at how JavaScript uses prototypal inheritance. A starting point might be an example like this:
function Cat(config) {
    if (typeof config !== 'undefined') {
        this.meow = config.meow; // config can supply mock methods
    }
}

Cat.prototype.meow = function() {
    // the meow that you want to use as part of your "production" code
};

var config = {};
config.meow = function() {
    // some mock "meow" stuff
};

var testCat = new Cat(config); // this one will use the mock "Cat#meow"
var realCat = new Cat();       // this one will use the prototype "Cat#meow"
In the above example, because of how JavaScript looks up the prototype chain, if it finds an implementation on the instance itself, it stops there and uses that method (thus, you've "overridden" the prototype method). However, if you don't pass in a config, it looks all the way up to the prototype for the Cat#meow method and uses that one.
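One way to see that lookup in action with the example above:
console.log(testCat.hasOwnProperty('meow')); // true: the instance property shadows the prototype
console.log(realCat.hasOwnProperty('meow')); // false: meow is resolved via Cat.prototype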
TL;DR: there's no single good way to implement JavaScript interfaces, especially ones that double as mocks (there's not even a best way to implement dependency injection... that's also a foreign concept to JavaScript itself, even though many libraries do successfully implement it for certain use-cases).
There are some useful libraries I want to use in AngularJS, e.g. jQuery, underscore, underscore.string.
It might not be a good idea to use them directly in Angular code (say, in controllers or directives), because that makes them hard to mock and test. So I want to wrap them in Angular modules:
angularUnderscore.js
define(['angular', 'underscore'], function(ng, _) {
    // note the second argument: a dependencies array is required when creating a module
    return ng.module('3rd-libraries', [])
        .service('underscoreService', function() {
            return _;
        });
});
My questions are:
Is it good to use .service() to define a service? Or is a factory or constant better?
Is it good to use underscoreService, or just underscore is enough and better?
I believe it is really a question of scope.
Although some will disagree, I think that loading underscore as a dependency of every test suite is just fine. The reason is my rule of thumb that any "static" operation, that is, any generic algorithm that is not application-logic or data sensitive, should be tested separately (or not at all, in the case of underscore-like frameworks).
This makes the tests simpler to write, more readable, and more maintainable; rare cases aside, these tests would probably fail anyway if underscore introduced a new bug in, say, sorting an array. Moreover, I can't see you benefiting (other than for mocking, which I addressed before) from DI of these algorithms.
However, if an algorithm is more complex and involves data or application-logic dependencies, I would definitely introduce a factory (or a service; both are singletons) just to encapsulate this logic and make it testable by itself. A factory sketch follows below.
As far as service vs factory (vs provider) goes, there are probably tons of answers out there; I personally liked this one.
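As a minimal sketch of that factory route for the original question (the module and service names are illustrative, and this assumes underscore is loaded globally before Angular):
// Expose the global underscore as an injectable, mockable dependency.
angular.module('3rd-libraries', [])
    .factory('underscore', function($window) {
        return $window._;
    });

// Consumers then inject it like any other dependency:
angular.module('app', ['3rd-libraries'])
    .controller('TodoController', function(underscore) {
        var grouped = underscore.groupBy([1, 2, 3], function(n) {
            return n % 2 === 0 ? 'even' : 'odd';
        });
    });
Naming the service underscore rather than underscoreService also keeps call sites short, and a test can still override it with a mock module.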
I'm currently using a mediator that sits in-between all my modules and allows them to communicate between one another. All modules must go through the mediator to send out messages to anything that's listening. I've been doing some reading on RequireJS but I've not found any documentation how best you facilitate communication between modules.
I've looked at signals but if I understand correctly signals aren't really that useful if you're running things through a mediator. I'm just left wondering what else I could try. I'm quite keen on using a callback pattern of some kind but haven't got past anything more sophisticated than a simple lookup table in the mediator.
Here's the signal implementation I found: https://github.com/millermedeiros/js-signals
Here's something else I found: http://ryanflorence.com/publisher.js/
Is there a standardized approach to this problem or must everything be dependency-driven?
Using a centralized event manager is a fairly common and pretty scalable approach. It's hard to tell from your question what problem, if any, you're having with an events model. The typical thing is as follows (using publisher):
File 1:
require(['publisher', 'module1'], function(Publisher, Module1) {
    var module = new Module1();
    Publisher.subscribe('globaleventname', module.handleGlobalEvent, module);
});
File 2:
require(['publisher', 'module2'], function(Publisher, Module2) {
    var module = new Module2();
    module.someMethod = function() {
        // method code
        // when this method needs module1 to run its handler
        Publisher.publish('globaleventname', 'arguments', 'to', 'eventhandlers');
    };
});
The main advantage here is loose coupling; rather than objects knowing methods of other objects, objects can fire events and other objects know how to handle that particular application state. If an object doesn't exist that handles the event, no error is thrown.
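For reference, a minimal sketch of the kind of centralized event manager assumed above (an illustration only, not the actual publisher.js API):
// publisher.js - a tiny centralized event manager
define(function() {
    var channels = {};
    return {
        subscribe: function(name, handler, context) {
            (channels[name] = channels[name] || []).push({ fn: handler, ctx: context });
        },
        publish: function(name) {
            var args = Array.prototype.slice.call(arguments, 1);
            (channels[name] || []).forEach(function(sub) {
                sub.fn.apply(sub.ctx, args);
            });
        }
    };
});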
What problems are you having with this approach?
Here's something you might want to try out:
https://github.com/naugtur/overlord.js
It can do a bit more than an ordinary publisher or mediator: it allows you to create a common API for accessing the methods of any module.
This is kind of a shameless plug, because it's my own tool, but it seems quite relevant to the question.
Support for require.js has been added.