Revealing module pattern use case - JavaScript

I'm reading a book called "Learning JavaScript Design Patterns" by Addy Osmani. This book is great.
There's an example of using the revealing module pattern:
var myRevealingModule = (function () {
    var privateVar = "Ben Cherry",
        publicVar = "Hey there!";

    function privateFunction() {
        console.log( "Name:" + privateVar );
    }

    function publicSetName( strName ) {
        privateVar = strName;
    }

    function publicGetName() {
        privateFunction();
    }

    // Reveal public pointers to
    // private functions and properties
    return {
        setName: publicSetName,
        greeting: publicVar,
        getName: publicGetName
    };
})();
myRevealingModule.setName( "Paul Kinlan" );
Now, of course, I understand how it works, but my question is simpler: what is the use case for using this in a regular, classic Node web app?
Let's say I have a car module that I wish to create in some procedure I have. I can't see how I can use the pattern in this case. How can I pass arguments to make new Car(args)?
Should I use this pattern for singletons? To create factories?

This pattern is used to encapsulate some private state while exposing ("revealing") a public interface.
There can be many use cases for this pattern, but at its core it shows how to separate the implementation (the private variables) from the API (the exposed functions), which is not trivial to achieve in JavaScript.
It's a good idea to use this whenever you have a module that has state.
To provide arguments, just expose an API function that accepts arguments, e.g.
return {
    createCar: function(model, mileage) { ... }
}
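For the car example from the question, a minimal sketch might look like this (the Car constructor, its model and mileage parameters, and the describe method are hypothetical names used for illustration):

var carModule = (function () {
    // Private constructor, hidden from the outside.
    function Car( model, mileage ) {
        this.model = model;
        this.mileage = mileage;
    }

    Car.prototype.describe = function () {
        return this.model + " (" + this.mileage + " km)";
    };

    // Reveal only a factory function; callers never touch Car directly.
    return {
        createCar: function ( model, mileage ) {
            return new Car( model, mileage );
        }
    };
})();

var myCar = carModule.createCar( "Ford Focus", 45000 );
console.log( myCar.describe() ); // "Ford Focus (45000 km)"

This gives you both a factory and encapsulation: the module itself acts as a singleton, while createCar hands out as many instances as you need.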

JavaScript is a bit of a strange language, sometimes doing strange things if you don’t follow best practices and if you’re not familiar with the ECMA standard. There are numerous strange things in the JavaScript syntax, one of those being the self-executing (self-invoking) functions...
You can read more details here:
http://blog.mgechev.com/2012/08/29/self-invoking-functions-in-javascript-or-immediately-invoked-function-expression/
You can pass arguments like this:
https://jsfiddle.net/6bhn80oz/
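The fiddle isn't reproduced here, but the general idea is to pass arguments at the point where the function expression is invoked; a minimal sketch (the names are illustrative):

var greeter = (function ( defaultName ) {
    var name = defaultName; // private state seeded from the argument

    return {
        greet: function () {
            console.log( "Hello, " + name );
        },
        setName: function ( newName ) {
            name = newName;
        }
    };
})( "Ben Cherry" ); // arguments are passed here, when the IIFE runs

greeter.greet(); // "Hello, Ben Cherry"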

I often use it in Angular 1 when I create services. For me, it makes it easier to know which methods and/or variables I can access on the service, while everything not exposed in the return {} stays private.
angular.module('myModule')
    .factory('someService', function () {
        function myFunction() {
        }
        return {
            getSomething: myFunction
        };
    });
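Consuming code then only ever sees the revealed names; a hedged usage sketch (the controller name is made up for illustration):

angular.module('myModule')
    .controller('someController', ['$scope', 'someService', function ($scope, someService) {
        // Only the revealed getSomething is visible here;
        // myFunction itself stays private to the factory.
        $scope.something = someService.getSomething();
    }]);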

Related

AngularJS 1.8 and ES6 modules: how to make a service or factory that "passes through" a class-based API interface?

I am gradually improving a codebase that originally had some AngularJS in various versions and some code that was not in a framework at all, using various versions of a software API. (For some reason this API is available - to pages loaded through the application - on AngularJS's $window.external...go figure.)
In my pre-ES6, AngularJS 1.8 phase, I have three services that interact with the software's API (call them someAPIget, someAPIset, and someAPIforms). Something like this:
// someAPIget.service.js
;(function () {
    var someAPIget = function ($window, helperfunctions) {
        function someFunc (param) {
            // Do something with $window.external.someExternalFunc
            return doSomethingWith(param)
        }
        return {
            someFunc: someFunc
        }
    }
    angular.module('someAPIModule').factory('someAPIget', ['$window', 'helperfunctions', someAPIget])
})()
I then had a service and module a level up from this, with someAPIModule as a dependency, that aggregated these functions and passed them through under one name, like this:
// apiinterface.service.js
;(function () {
    // Change these lines to switch which API service all functions will use.
    var APIget = 'someAPIget'
    var APIset = 'someAPIset'
    var APIforms = 'someAPIforms'

    var APIInterface = function (APIget, APIset, APIforms) {
        return {
            someFunc: APIget.someFunc,
            someSettingFunc: APIset.someSettingFunc,
            someFormLoadingFunc: APIforms.someFormLoadingFunc
        }
    }
    angular.module('APIInterface').factory('APIInterface', [APIget, APIset, APIforms, APIInterface])
})()
I would then call these functions in various other controllers and services by using APIInterface.someFunc(etc). It worked fine, and if we switch to a different software provider, we can use our same pages without rewriting everything, just the interface logic.
However, I'm trying to upgrade to Typescript and ES6 so I can use import and export and build some logic accessible via command line, plus prepare for upgrading to Angular 11 or whatever the latest version is when I'm ready to do it. So I rebuilt someAPIget to a class:
// someAPIget.service.ts
export class someAPIget {
  private readonly $window
  private readonly helperfunctions
  static $inject = ['$window', 'helperfunctions']

  constructor ($window, helperfunctions) {
    this.$window = $window
    this.helperfunctions = helperfunctions
  }

  someFunc (param) {
    // Do something with this.$window.external.someExternalFunc
    return doSomethingWith(param)
  }
}

angular
  .module('someAPImodule')
  .service('someAPIget', ['$window', 'helperfunctions', someAPIget])
Initially it seemed like it worked (my tests still pass, or at least they do after a bit of cleanup in the Typescript compilation department), but then when I load it into the live app... this.$window is not defined. If, however, I use a direct dependency and call someAPIget.someFunc(param) instead of going through APIInterface.someFunc(param), it works fine (but I really don't want to rewrite thousands of lines of code that use APIInterface for the calls, and doing so would defeat the whole point of wrapping the API in an interface to begin with).
I've tried making APIInterface into a class and assigning getters for every function that return the imported function, but $window still isn't defined. Using console.log statements I can see that this.$window is defined inside someFunc itself, and it's defined inside the getter in APIInterface; but from what I can tell, when I call it through APIInterface it's being called without first running the constructor on someAPIget, even if I make sure to use $onInit() for the relevant calls.
I feel like I am missing something simple here. Is there some way to properly aggregate and rename these functions to use throughout my program? How do I alias them correctly to a post-constructed version?
Edit to add: I have tried with someAPIget as both a factory and a service, and APIInterface as both a factory and a service, and by calling APIInterface in the .run() of the overall app.module.ts file, none of which works. (The last one just changes the location of the undefined error.)
Edit again: I have also tried using static for such a case, which is somewhat obviously wrong, but then at least I get the helpful error highlight in VSCode of Property 'someProp' is used before its initialization.ts(2729).
How exactly are you supposed to use a property that is assigned in the constructor? How can I force AngularJS to execute the constructor before attempting to access the class's members?
I am not at all convinced that I found an optimal or "correct" solution, but I did find one that works, which I'll share here in case it helps anyone else.
I ended up calling each imported function in a class method of the same name on the APIInterface class, something like this:
// apiinterface.service.ts
// Change these lines to switch which API service all functions will use.
const APIget = 'someAPIget'
const APIset = 'someAPIset'
const APIforms = 'someAPIforms'

export class APIInterface {
  private readonly APIget
  private readonly APIset
  private readonly APIforms

  constructor (APIget, APIset, APIforms) {
    this.APIget = APIget
    this.APIset = APIset
    this.APIforms = APIforms
  }

  someFunc (param: string): string {
    return this.APIget.someFunc(param)
  }

  someSettingFunc (param: string): string {
    return this.APIset.someSettingFunc(param)
  }

  someFormLoadingFunc (param: string): string {
    return this.APIforms.someFormLoadingFunc(param)
  }
}

angular
  .module('APIInterface')
  .factory('APIInterface', [APIget, APIset, APIforms, APIInterface])
It feels hacky to me, but it does work.
Later Update:
I am now using Angular 12, not AngularJS, so some details may be a bit different. Lately I have been looking at using the public-api.ts file that Angular 12 generates to accomplish the same thing (i.e., export { someAPIget as APIget } from './filename'), but I have not yet experimented with this, since it would still require either consolidating my functions somehow or rewriting the code that consumes them to use one of three possible solutions. It would be nice not to have to duplicate function signatures and doc strings, however. It's still a question I'm trying to answer more effectively; I will update again if I find something that really works.

View Model inheritance when using Durandal

I am building an application using Durandal and I have the need to share some functionality across view models.
I have 5 screens to build and they are all virtually the same screen, except that in the activate function they will call different API endpoints; otherwise the views and view models will be identical.
Is there a pattern that I should be following to structure this correctly to promote code reuse?
If the views and the view models are identical except for calling different API actions, what about just taking in a parameter as part of the route? Then in the activate function, you can switch on the parameter. The route values can be designated so that your URL is relevant, like http://site/page/subtype, where subtype is the parameter (instead of using numeric values). A sketch of this idea follows below.
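A minimal sketch, assuming Durandal 2.x's router and its plugins/http module; the endpoint map and module names are made up for illustration:

// shell.js - map one route with a :subtype parameter
router.map([
    { route: 'page/:subtype', moduleId: 'viewmodels/page', title: 'Page' }
]);

// viewmodels/page.js - one view model serves all five screens
define(function (require) {
    "use strict";
    var http = require('plugins/http');

    // Hypothetical mapping from route parameter to API endpoint.
    var endpoints = {
        cars:   'api/cars',
        trucks: 'api/trucks'
    };

    return {
        items: [],
        activate: function (subtype) {
            // The route parameter picks the endpoint; everything else is shared.
            var self = this;
            return http.get(endpoints[subtype]).then(function (data) {
                self.items = data;
            });
        }
    };
});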
Regarding inheritance: depending on the features you need, there are so many ways to do JavaScript inheritance that it can be a little confusing. There are some full-featured inheritance models provided by libraries such as base2 and Prototype. John Resig also has an inheritance model that I've used successfully.
In general, I prefer to stick to simpler solutions when it comes to JS inheritance. If you need pretty much the full set of inheritance features, those libraries are good to consider. If you only really care about accessing a set of properties and functions from a base class, you might be able to get by with just defining the view model as a function and replacing the function's prototype with the desired base class. Refer to Mozilla's Developer Docs for good info on inheritance.
Here's a sample:
//viewModelBase
define(function (require) {
    "use strict";

    function _ctor() {
        var baseProperty = "Hello from base";

        function baseFunction() {
            console.log("Hello from base function");
        }

        //exports
        this.baseProperty = baseProperty;
        this.baseFunction = baseFunction;
    }

    //return an instance of the view model (singleton)
    return new _ctor();
});

//view model that inherits from viewModelBase
define(function (require) {
    "use strict";

    function _ctor() {
        var property1 = "my property value";

        function activate() {
            //add start up logic here, and return true, false, or a promise()
            return true;
        }

        //exports
        this.activate = activate;
        this.property1 = property1;
    }

    //set the "base"
    var _base = require("viewModelBase");
    _ctor.prototype = _base;
    _ctor.prototype.constructor = _ctor;

    //return an instance of the view model (singleton)
    return new _ctor();
});
Keep in mind this example results in what is effectively a singleton (i.e. you'll get the same instance, no matter how many times you require() it).
If you want a transient (non-singleton), just return _ctor. Then you'll need to instantiate a new instance after you require() it (see the sketch below).
One more note: in general, functions should be defined on the prototype, not within the constructor function itself. See this link for more information on why. Because this example results in only a single instance, it's a moot point here, so the functions are inside the constructor for improved readability and for access to the private vars and functions.
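For completeness, a minimal sketch of the transient variant with methods on the prototype (module and property names are illustrative):

//transient view model: methods on the prototype, constructor returned un-invoked
define(function (require) {
    "use strict";

    function ViewModel() {
        this.property1 = "my property value";
    }

    // Defined once on the prototype, shared by every instance.
    ViewModel.prototype.activate = function () {
        return true;
    };

    // Return the constructor itself, not an instance.
    return ViewModel;
});

//consumer: each new ViewModel() yields a fresh, independent instance
define(function (require) {
    "use strict";
    var ViewModel = require("viewModel");
    var vm = new ViewModel();
});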

Backbone structure for custom objects

Looking at some Backbone examples, I see some simple models like this:
var Vehicle = Backbone.Model.extend({
    summary: function () {
        return 'Vehicles move';
    }
});

or

Vehicle = (function () {
    return Backbone.Model.extend({
        defaults: {
        },
        initialize: function () {
        }
    });
})();
Edit: (clarification)
I was wondering if someone could explain the differences between the two ways of defining Backbone objects and which is more conventional. I know they don't have the same methods inside, but I'm more interested in how the first one extends the Backbone model directly, while the second wraps it in a closure. I'm not sure I really grasp what's going on in each and when you would use which pattern. Thanks in advance!
I would consider the first form much more conventional, especially since I don't even see the second form on the main Backbone.js website at all.
To understand how they do the same thing, first notice that Backbone.Model.extend() is a function that also returns a function:
> Backbone.Model.extend()
function () { return parent.apply(this, arguments); }
So the variable Vehicle ends up being set to a function that is a model constructor either way you look at it. I would consider the second form more indirect and unnecessarily complex, though: it sets Vehicle to the result of calling a function that itself just returns Backbone.Model.extend(), so it's just a more convoluted way of saying the same thing.
If all the properties of the model are easy to define, pattern 1 is suggested. However, if any property is complex to implement and needs a "private" helper function that you do not want to expose, either on your model or on the global object, it is better to use the closure to hide it; that is pattern 2.
An Example:
Vehicle = (function () {
    function helper1() {} //don't want to expose it
    function helper2() {}

    return Backbone.Model.extend({
        defaults: {
        },
        initialize: function () {
        },
        summary: function () {
            helper1();
            helper2();
        }
    });
})();

Passing in functions to Backbone.View.extend

Recently I've been having an argument with some co-workers about something that I find incorrect.
We're using Backbone in a large application, and my way of creating views is the 'standard' Backbone way:
var MyView = Backbone.View.extend({
    className: 'foo',
    initialize: function () {
        _.bindAll(this, 'render' /* ... more stuff */);
    },
    render: function () {
        /* ... render, usually
           using _.template and passing
           in this.model.toJSON()... */
        return this;
    }
});
But someone in the team recently decided to do it this way :
var MyView = Backbone.View.extend((function () {
    /* 'private stuff' */
    function bindMethods(view) {
        _.bindAll(view, /* ... more stuff */);
    }

    function render(view) {
        /* ... render, usually
           using _.template and passing
           in view.model.toJSON()... */
    }

    return {
        className: 'foo',
        initialize: function () {
            bindMethods(this);
            render(this);
        }
    };
}()));
That's the idea in pseudo-code.
Having read the Backbone source and various tutorials and articles, I find this to be a bad practice (to me it makes no sense), but I'd love some feedback from other Backbone developers/users.
Thanks in advance
One benefit I see from using the closure is providing a private scope for variables and functions that you don't want to be accessible from code outside the view.
Even so, I haven't seen many Backbone apps use a closure to define a view/model/collection etc.
Here's an email from Jeremy Ashkenas concerning this issue as well.
Yes, using closures to create instances of objects with private variables is possible in JavaScript. But it's a bad practice, and should be avoided. This has nothing to do with Backbone in particular; it's the nature of OOP in JavaScript.
If you use the closure pattern (also known as the "module" pattern), you're creating a new copy of each function for each instance you create. This completely ignores prototypes, and is terribly inefficient both in terms of speed and especially in terms of memory use. If you make 10,000 models, you'll also have 10,000 copies of each member function. With prototypes (with Backbone.Model.extend), you'll only have a single copy of each member function, even if there are 10,000 instances of the class.
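A hedged illustration of the difference Ashkenas is describing (the class and function names are made up):

// Closure style: every instance carries its own copy of each function.
function makeClosureModel() {
    var secret = 0; // truly private, but paid for with per-instance functions
    return {
        increment: function () { secret += 1; },
        read: function () { return secret; }
    };
}

// Prototype style (what Backbone.Model.extend uses): one shared copy.
function ProtoModel() {
    this._value = 0; // conventionally "private" via naming only
}
ProtoModel.prototype.increment = function () { this._value += 1; };
ProtoModel.prototype.read = function () { return this._value; };

// 10,000 closure models hold 20,000 distinct function objects;
// 10,000 ProtoModels share the same two prototype functions.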
I totally agree with Paul here. Sometimes you may find it necessary to define methods and properties that are private and can't be messed with from outside. I think it depends on whether you need this scoping mechanism in your class or not. Mixing both approaches, with respect to the requirements you have for the class, wouldn't be so bad, would it?

Dependency Injection with RequireJS

How much can I stretch RequireJS to provide dependency injection for my app? As an example, let's say I have a model that I want to be a singleton. Not a singleton in a self-enforcing getInstance()-type singleton, but a context-enforced singleton (one instance per "context"). I'd like to do something like...
require(['mymodel'], function (mymodel) {
    // ...
});
And have mymodel be an instance of the MyModel class. If I were to do this in multiple modules, I would want mymodel to be the same, shared instance.
I have successfully made this work by making the mymodel module like this:
define(function() {
    var MyModel = function() {
        this.value = 10;
    }
    return new MyModel();
});
Is this type of usage expected and common or am I abusing RequireJS? Is there a more appropriate way I can perform dependency injection with RequireJS? Thanks for your help. Still trying to grasp this.
This is not actually dependency injection, but instead service location: your other modules request a "class" by a string "key," and get back an instance of it that the "service locator" (in this case RequireJS) has been wired to provide for them.
Dependency injection would involve returning the MyModel constructor, i.e. return MyModel, then in a central composition root injecting an instance of MyModel into other instances. I've put together a sample of how this works here: https://gist.github.com/1274607 (also quoted below)
This way the composition root determines whether to hand out a single instance of MyModel (i.e. make it singleton scoped) or new ones for each class that requires it (instance scoped), or something in between. That logic belongs neither in the definition of MyModel, nor in the classes that ask for an instance of it.
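For example, a sketch of how the composition root controls scope (module names are illustrative):

// composition root: scoping decisions live here, not in the modules
define(function (require) {
    var MyModel = require("./MyModel"); // module returns the constructor
    var ViewA = require("./ViewA");
    var ViewB = require("./ViewB");

    // Singleton scope: both views share one instance...
    var shared = new MyModel();
    var a = new ViewA(shared);
    var b = new ViewB(shared);

    // ...or instance scope: each view gets its own.
    // var a = new ViewA(new MyModel());
    // var b = new ViewB(new MyModel());
});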
(Side note: although I haven't used it, wire.js is a full-fledged dependency injection container for JavaScript that looks pretty cool.)
You are not necessarily abusing RequireJS by using it as you do, although what you are doing seems a bit roundabout, i.e. declaring a class then returning a new instance of it. Why not just do the following?
define(function () {
    var value = 10;
    return {
        doStuff: function () {
            alert(value);
        }
    };
});
The analogy you might be missing is that modules are equivalent to "namespaces" in most other languages, albeit namespaces you can attach functions and values to. (So more like Python than Java or C#.) They are not equivalent to classes, although as you have shown you can make a module's exports equal to those of a given class instance.
So you can create singletons by attaching functions and values directly to the module, but this is kind of like creating a singleton by using a static class: it is highly inflexible and generally not best practice. However, most people do treat their modules as "static classes," because properly architecting a system for dependency injection requires a lot of thought from the outset that is not really the norm in JavaScript.
Here's https://gist.github.com/1274607 inline:
// EntryPoint.js
define(function () {
    return function EntryPoint(model1, model2) {
        // stuff
    };
});

// Model1.js
define(function () {
    return function Model1() {
        // stuff
    };
});

// Model2.js
define(function () {
    return function Model2(helper) {
        // stuff
    };
});

// Helper.js
define(function () {
    return function Helper() {
        // stuff
    };
});

// composition root, probably your main module
define(function (require) {
    var EntryPoint = require("./EntryPoint");
    var Model1 = require("./Model1");
    var Model2 = require("./Model2");
    var Helper = require("./Helper");

    var entryPoint = new EntryPoint(new Model1(), new Model2(new Helper()));
    entryPoint.start();
});
If you're serious about DI / IOC, you might be interested in wire.js: https://github.com/cujojs/wire
We use a combination of service location (as Domenic describes, but using curl.js instead of RequireJS) and DI (using wire.js). Service location comes in very handy when using mock objects in test harnesses. DI seems the best choice for most other use cases.
Not a singleton in a self-enforcing getInstance()-type singleton, but a context-enforced singleton (one instance per "context").
I would recommend it only for static objects. It's perfectly fine to have a static object as a module that you load in require/define blocks. You then create a class with only static properties and functions. You then have the equivalent of the Math object, which has constants like PI, E, and SQRT2 and functions like round(), random(), max(), and min(). Great for creating utility classes that can be injected at any time.
Instead of this:
define(function() {
    var MyModel = function() {
        this.value = 10;
    }
    return new MyModel();
});
Which creates an instance, use the pattern for a static object (one where the values are always the same, as the object never gets instantiated):
define(function() {
    return {
        value: 10
    };
});

or

define(function() {
    var CONSTANT = 10;
    return {
        value: CONSTANT
    };
});
If you want to pass an instance (the result of using a module that has return new MyModel();), then, within an initialize function, pass a variable that captures the current state / context, or pass an object that contains information about the state / context that your modules need to know about.
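A hedged sketch of that idea (the initialize name and the shape of the context object are made up for illustration):

// mymodel.js
define(function () {
    var MyModel = function () {
        this.value = 10;
        this.context = null;
    };

    MyModel.prototype.initialize = function (context) {
        // Capture whatever state / context the module needs to know about.
        this.context = context;
    };

    // Still one shared instance per RequireJS context...
    return new MyModel();
});

// ...but consumers seed it with the current state explicitly:
require(['mymodel'], function (mymodel) {
    mymodel.initialize({ user: 'ben', locale: 'en' }); // hypothetical context object
});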
