I'm currently using a mediator that sits between all my modules and allows them to communicate with one another. All modules must go through the mediator to send out messages to anything that's listening. I've been doing some reading on RequireJS, but I haven't found any documentation on how best to facilitate communication between modules.
I've looked at signals but if I understand correctly signals aren't really that useful if you're running things through a mediator. I'm just left wondering what else I could try. I'm quite keen on using a callback pattern of some kind but haven't got past anything more sophisticated than a simple lookup table in the mediator.
Here's the signal implementation I found: https://github.com/millermedeiros/js-signals
Here's something else I found: http://ryanflorence.com/publisher.js/
Is there a standardized approach to this problem or must everything be dependency-driven?
Using a centralized event manager is a fairly common and pretty scalable approach. It's hard to tell from your question what problem, if any, you're having with an events model. The typical thing is as follows (using publisher):
File 1:
require(['publisher', 'module1'], function (Publisher, Module1) {
    var module = new Module1();
    Publisher.subscribe('globaleventname', module.handleGlobalEvent, module);
});
File 2:
require(['publisher', 'module2'], function (Publisher, Module2) {
    var module = new Module2();
    module.someMethod = function () {
        // method code
        // when method needs module1 to run its handler
        Publisher.publish('globaleventname', 'arguments', 'to', 'eventhandlers');
    };
});
The main advantage here is loose coupling: rather than objects knowing each other's methods, objects fire events and other objects know how to respond to that particular application state. If no object exists that handles a given event, nothing happens and no error is thrown.
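For reference, a minimal publisher/mediator along these lines might look roughly like the sketch below. This is illustrative only; the real publisher.js linked above has its own API:

// publisher.js -- illustrative sketch, not the actual publisher.js API
define(function () {
    var channels = {};

    return {
        subscribe: function (name, handler, context) {
            (channels[name] = channels[name] || []).push({ handler: handler, context: context });
        },
        publish: function (name) {
            var args = Array.prototype.slice.call(arguments, 1);
            // if nothing is subscribed to this event, the loop simply does nothing
            (channels[name] || []).forEach(function (sub) {
                sub.handler.apply(sub.context, args);
            });
        }
    };
});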
What problems are you having with this approach?
Here's something you might want to try out:
https://github.com/naugtur/overlord.js
It can do a bit more than an ordinary publisher or mediator. It allows creating a common API for accessing any methods of any modules.
This is kind of a shameless plug, because it's my own tool, but it seems quite relevant to the question.
Support for require.js has been added.
Related
I'm currently in the process of architecting a Node.js microservice-based application, and I'd like to make use of Domain Driven Design as a guideline for structuring the individual services. I have a few questions on this as follows:
As far as I understand, the domain layer should typically contain repository interfaces, and specific implementations of these interfaces should then be created in your infrastructure layer. This abstracts the underlying storage/technical concerns out of your domain layer. In the context of my project, seeing as JavaScript does not natively support things like interfaces, how would one go about achieving a similar effect?
One service in particular will be handling authentication by means of OAuth. Would one typically classify OAuth-related logic as an application service? Or would it fall under the infrastructure layer? My understanding is that it's not infrastructure-related, nor is it related to the core domain, but is still required as part of serving the application to clients. Hence, I've placed it within the application layer for now.
Following on from point 2, where would OAuth-related entities/repositories (i.e. tokens/clients) best be placed? I'm tempted to keep them in the domain layer along with my other entities/repositories, even though they aren't technically a business concern.
Some notes to add to the above:
I'm not keen on using TypeScript (as suggested here).
I am fully aware that some may consider JavaScript as being non-suitable for DDD-type approaches. I'm familiar with some of the pitfalls, and once again, I'm using DDD as a guideline.
On a side note, I have come up with the following project structure, which was heavily inspired by this great example of a DDD-based project in Laravel:
app/
----app.js
----package.json
----lib/
--------app/
------------service/
----------------oauth.js
----------------todo.js
--------domain/
------------list/
----------------model.js
----------------repository.js
----------------service.js
------------task/
----------------model.js
----------------repository.js
----------------service.js
--------http/
------------controller/
----------------list.js
----------------task.js
------------middleware/
----------------auth.js
----------------error.js
--------infrastructure/
------------db.js
------------logger.js
------------email.js
I would love to hear any thoughts you may have on this layout. I'm fully aware that the topic of project structure is somewhat opinion-based, but I'm always keen to hear what others have to say.
Have you considered wolkenkit?
It's a CQRS and event-sourcing framework for Node.js and JavaScript, that works pretty well with domain-driven design (DDD). It might give you a good idea of how to structure your code, and also a runtime to execute things without having to re-invent the wheel.
I know the guys behind it and they invested 3-4 years of thoughts, blood and sweat into this.
Domain-Driven Design guides decomposition of a system into a set of bounded contexts/services/microservices. However, the way you design each service is individual and depends on the service's business domain. For example, your business's core domain services and supporting domain services should be architected in different ways.
Even though this question is quite old, I think it's worth adding a clarification regarding your first question:
As far as I understand, the domain layer should typically contain repository interfaces, and specific implementations of these interfaces should then be created in your infrastructure layer. This abstracts the underlying storage/technical concerns out of your domain layer. In the context of my project, seeing as JavaScript does not natively support things like interfaces, how would one go about achieving a similar effect?
JavaScript does not provide interfaces. Instead of mimicking OOP concepts, why not have a look at more functional ones? Higher-order functions are very well suited to JavaScript; you can use them to "declare" dependencies and inject them at runtime:
const { FooData } = require('./data');

// The default implementation throws: it acts as a declaration of the
// dependency the domain expects to have injected at runtime.
const getFooOfId = (
    getFooOfIdImpl = async (fooId) => {
        throw new Error(`Can't retrieve foo of id ${fooId}: missing implementation`);
    }
) => async (fooId) => {
    try {
        const fooData = await getFooOfIdImpl(fooId);
        return FooData(fooData);
    } catch (err) {
        throw new Error(`Unable to retrieve Foo with id ${fooId}`);
    }
};

/*
This function is used to build a concrete implementation, for example with an
in-memory database:

const inMemoryFooDatabase = {
    foo17: {
        id: 'foo17',
        foo: 'a foo value',
        bar: 'a bar value',
        foobaz: 1234,
    },
};

const getFooOfIdFromInMemoryDatabase = getFooOfId(fooId => inMemoryFooDatabase[fooId]);
*/

module.exports = {
    getFooOfId,
};
Nothing prevents you from bypassing this function entirely, because there is no strong type-checking in JavaScript, but it acts as a declaration of your domain's "interface" needs.
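To make the wiring concrete, here is a minimal sketch of how the pieces might fit together at a composition root. The path './domain/foo' and the use-case names are hypothetical; the in-memory database is the one from the comment above:

// composition root (e.g. somewhere in the infrastructure/startup code)
const { getFooOfId } = require('./domain/foo'); // hypothetical path to the module above

const inMemoryFooDatabase = {
    foo17: { id: 'foo17', foo: 'a foo value', bar: 'a bar value', foobaz: 1234 },
};

// inject the concrete storage access into the domain function
const getFooOfIdFromInMemoryDatabase = getFooOfId(async fooId => inMemoryFooDatabase[fooId]);

// an application-layer use case only ever sees the injected function,
// never the storage details behind it
const makeGetFooUseCase = getFoo => async fooId => {
    const foo = await getFoo(fooId);
    // ...application-specific logic goes here...
    return foo;
};

const getFoo = makeGetFooUseCase(getFooOfIdFromInMemoryDatabase);
getFoo('foo17').then(foo => console.log(foo)); // logs the FooData built from foo17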
If you want to learn more about this, you can read my post on this subject: Domain-Driven Design for JavaScript developers
I'm learning Node.js (-awesome-), and I'm toying with the idea of using it to create a next-generation MUD (online text-based game). In such games, there are various commands, skills, spells etc. that can be used to kill bad guys as you run around and explore hundreds of rooms/locations. Generally speaking, these features are pretty static - you can't usually create new spells, or build new rooms. I however would like to create a MUD where the code that defines spells and rooms etc. can be edited by users.
That has some obvious security concerns; a malicious user could, for example, upload some JS that spawns a child process to run 'rm -r /'. I'm not as concerned with protecting the internals of the game (I'm securing as much as possible, but there's only so much you can do in a language where everything is public); I could always track code changes wiki-style, and punish users who e.g. crash the server, or boost their power over 9000, etc. But I'd like to solidly protect the server's OS.
I've looked into other SO answers to similar questions, and most people suggest running a sandboxed version of Node. This won't work in my situation (at least not well), because I need the user-defined JS to interact with the MUD's engine, which itself needs to interact with the filesystem, system commands, sensitive core modules, etc. etc. Hypothetically all of those transactions could perhaps be JSON-encoded in the engine, sent to the sandboxed process, processed, and returned to the engine via JSON, but that is an expensive endeavour if every single call to get a player's hit points needs to be passed to another process. Not to mention it's synchronous, which I would rather avoid.
So I'm wondering if there's a way to "sandbox" a single Node module. My thought is that such a sandbox would need to simply disable the 'require' function, and all would be bliss. So, since I couldn't find anything on Google/SO, I figured I'd pose the question myself.
Okay, so I thought about it some more today, and I think I have a basic strategy:
var require = function (module) {
    throw "Uh-oh, untrusted code tried to load module '" + module + "'";
};
var module = null;
// use a similar strategy for anything else susceptible

var loadUntrusted = function (code) {
    eval(code);
};
Essentially, we just use variables in a local scope to hide the Node API from eval'ed code, and run the code. Another point of vulnerability would be objects from the Node API that are passed into untrusted code. If e.g. a buffer was passed to an untrusted object/function, that object/function could work its way up the prototype chain, and replace a key buffer function with its own malicious version. That would make all buffers used for e.g. File IO, or piping system commands, etc., vulnerable to injection.
So, if I'm going to succeed in this, I'll need to partition untrusted objects into their own world: the outside world can call methods on them, but they cannot call methods on the outside world. Of course, please feel free to tell me about any further security vulnerabilities you can think of in this strategy.
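One direction worth exploring before rolling your own scope-hiding is Node's built-in vm module, which runs code in a separate V8 context so it doesn't share globals with the engine. To be clear, vm is documented as not being a security mechanism, and it doesn't by itself solve the passed-in-object problem described above; treat this as a sketch of the isolation idea, not a finished sandbox. engineApi here is a hypothetical facade you would choose to expose:

var vm = require('vm');

function loadUntrusted(code, engineApi) {
    // the sandbox object becomes the global object of the new context;
    // only what you explicitly put on it is visible to the untrusted code
    var sandbox = { api: Object.freeze(engineApi) }; // freeze is shallow, so keep the facade simple
    vm.createContext(sandbox);

    // a timeout stops an accidental (or malicious) infinite loop from hanging the server
    vm.runInContext(code, sandbox, { timeout: 100 });

    return sandbox; // whatever the untrusted code attached to its own globals
}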
I am trying to write a reasonably chunky client side web app using Javascript and jQuery. In order to organise my code, I read up on Javascript module systems and decided to go with AMD modules. Currently I'm using curl.js as my module loader, but I'm not particularly wedded to that.
Unfortunately I've now run up against an issue where two of my modules need to be mutually dependent. I was expecting it to Just Work --- but what actually happens is that loading the app just seems to stall half-way through and everything stops, with no error messages.
A quick Google shows practically no mention whatsoever of AMD and mutually recursive modules. Can I actually do this, and if so, how? (Do I need to change to a different module loader?)
If not, any suggestions on an alternative module system which does support mutually recursive modules?
So after realising that an alternative name for 'mutual recursion' is 'circular dependencies', I found some references online (notably the require.js manual page on the topic).
The short summary is: no, this doesn't work. There are various ways to get round it, but it fundamentally does not Just Work.
The simplest workaround is to break the dependency chain using an explicit synchronous require() call:
define(
    ["require", "NotLoadedYet"],
    function (require, NLY)
    {
        // NLY is undefined here
        return {
            doSomething: function ()
            {
                var realNLY = require("NotLoadedYet"); // fetch the real NLY on demand
                realNLY.doSomething(); // actually call the method
            }
        };
    }
);
Obviously this only works if you can guarantee that NotLoadedYet really has been loaded by the time you call the method.
The idea of using dynamic late-binding in a language which already does dynamic late-binding is pretty ew, but it does work. Sigh. There seems to be a slightly less ew technique which involves changing to use requirejs's CommonJS support instead, but I don't know how that works so I'm sticking to this.
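For reference, the CommonJS-style technique from the require.js docs looks roughly like this; the key point is that exports exists as a (possibly still empty) object before either module body runs, so each side of the cycle gets a real object reference to hold on to:

define(function (require, exports, module) {
    // with the CommonJS sugar, this gets a reference to NotLoadedYet's exports
    // object even mid-cycle, although its properties aren't usable yet
    var nly = require("NotLoadedYet");

    exports.doSomething = function () {
        // safe at call time, after both modules have finished defining themselves
        nly.doSomething();
    };
});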
What I'll actually do is implement a NotLoadedYetImpl module which contains the implementation and a NotLoadedYet module which proxies through the above mechanism. It's a shame Javascript doesn't do getters and setters on all properties for an object, or I could do it all automatically, too...
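A proxy module along those lines might look roughly like this (a sketch; each public method has to be forwarded by hand, which is exactly the boilerplate I was lamenting):

// NotLoadedYet.js -- thin proxy; the real implementation lives in NotLoadedYetImpl
define(["require", "NotLoadedYetImpl"], function (require) {
    return {
        doSomething: function () {
            // look the implementation up lazily, at call time, so load order
            // within the dependency cycle no longer matters
            return require("NotLoadedYetImpl").doSomething.apply(null, arguments);
        }
        // ...one forwarding stub per public method...
    };
});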
Can anyone suggest a pattern that can be used for writing a JavaScript API wrapper, where there is no shared code between multiple implementations? The idea is to provide the client consumer with a single wrapping API for one of many possible APIs determined at runtime. APIs calls could be to objects/libraries already in the app environment, or web service calls.
The following bits of pseudo-code are two approaches I've considered:
Monolithic Solution
var apiWrapper = {
    init: function () {
        // *runtime* context of which API to call
        this.context = App.getContext();
    },
    getName: function () {
        switch (this.context) {
            case a:
                return a.getDeviceName(); // real API call
            case b:
                return b.deviceName;      // real API call
            // etc...
        }
    }
    // More methods ...
};
Pros: Can maintain a consistent API for the library consumers.
Cons: Will result in a huge monolithic library, difficult to maintain.
Module Solution
init.js
// set apiWrapper to the correct implementation determined at runtime
require([App.getContext()], function (api) {
    var apiWrapper = api;
});
module_a.js
// Implementation for API A
define(function () {
    var module = {
        getName: function () {
            return deviceA.getDeviceName();
        }
    };
    return module;
});
module_b.js
// Implementation for API B
define(function () {
    var module = {
        getName: function () {
            // could also potentially be a web service call
            return deviceB.deviceName;
        }
    };
    return module;
});
Pros: More maintainable.
Cons: Developers need to take care that the API remains consistent across implementations. Not particularly DRY.
This would be a case where something along the lines of an interface would be useful, but as far as I'm aware there's no way to enforce a contract in JS.
Is there a best practice approach for this kind of problem?
What a coincidence; someone out there is doing exactly what I'm doing! Recently I've been delving into JS application patterns and exploring the modular pattern.
I started out with this article, which has a lot of links to other JS developers.
It's better to go modular:
mainly to avoid dependencies between two parts of a website
though one part could depend on another, they stay "loosely coupled"
a site should keep working even when you tear a part of it out
you can test parts individually without pulling in everything else
you can easily swap out the underlying library (jQuery, Dojo, MooTools, etc.) without affecting the existing modules (since you are building your own API on top)
when you need to change/upgrade your API (or swap the underlying library), you only touch the API "backing", not the API itself, and you don't have to re-code the parts that use it
Here are the links I've been through (mostly videos) that convey what the modular approach is all about and why to use it:
Nicholas Zakas: how to organize the API, libraries and modules
Addy Osmani: how and why modules
Michael Mahemoff: automatic module event (shout/listen) registration
The second approach is better because it is modular and you (or a third party) can easily extend it to incorporate other services. The concern about the "API remaining consistent" is not that significant, because with proper unit tests you can keep things consistent.
The second approach is also future-proof: you don't know what unimaginable things you may have to do to implement, say, getName for service C. In that case it's better to have a separate module_c.js containing all the complications than spaghetti code in a single monolithic module.
A real interface is, IMO, not that important; a documented interface with unit tests is enough.
I'd go with the modular solution. Though there's no built-in way to enforce contracts, you can still decide on one, then go TDD and build a test suite that tests the modules' interface compliance.
Your test suite then basically takes the role that compiling would in a language with explicit interfaces: If an interface is incorrectly implemented, it'll complain.
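Such a compliance test can be as simple as looping over the method names the contract requires and asserting that every implementation provides them. A rough sketch, reusing the module names from the pseudo-code above (the contract list itself is illustrative):

// contractTest.js -- rough interface-compliance check for all implementations
define(["module_a", "module_b"], function (moduleA, moduleB) {
    var requiredMethods = ["getName" /*, ...the rest of the agreed contract... */];

    [moduleA, moduleB].forEach(function (impl, index) {
        requiredMethods.forEach(function (method) {
            if (typeof impl[method] !== "function") {
                throw new Error("Implementation #" + index + " is missing '" + method + "'");
            }
        });
    });
});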
I've built a system whereby multiple modules are loaded into an "app.js" file. Each module has a route and a schema attached. There will be times when a module needs to request data from another module's schema. Because I want to keep my code DRY, I want one module to be able to tell another module that it needs a certain piece of data and to receive its response.
I've looked at using the following:
dnode (RPC calls)
Dnode seems more suitable for inter-process communication - I want to isolate these internal messages to within the process.
Faye (Pubsub)
Also seems more like something intended for inter-process communication, and overkill here
EventEmitter
I was advised by someone in #Node.js to stay away from EventEmitter if there is potentially a large number of modules (and therefore a large number of subscriptions)
Any suggestions would be very much appreciated!
Dependency injection, or simply invoking other modules directly, works.
So either
var m = require("othermodule");
m.doStuff();
Or use a DI library like nCore
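Or, if you'd rather not pull in a library, hand-rolled injection is just a matter of exporting a factory that takes its collaborators as arguments. A minimal sketch (users.js and orders.js are hypothetical module names, and this is not nCore's actual API):

// orders.js -- exports a factory instead of requiring users.js itself
module.exports = function createOrdersModule(usersModule) {
    return {
        ordersFor: function (userId) {
            var user = usersModule.findById(userId); // injected dependency
            return user ? user.orders : [];
        }
    };
};

// app.js -- wire the modules together in one place
var usersModule = require("./users");                // assumed to export findById()
var ordersModule = require("./orders")(usersModule); // inject it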