I've seen the revealing module pattern, as well as the prototype class pattern. I kind of adapted the two together into a revealing class pattern. I'm trying to figure out if there are any gotchas with this pattern, or things that wouldn't work. As long as the constructor for the object is typeof "function" I would think there wouldn't be any problems.
The only caveat I can come up with is a class instantiated in high volume, where the private functions get created every time a new instance is created. This may cause memory problems across thousands of instantiations. For larger classes that aren't created in bulk, though, my thinking is that the easier readability and extra security may be of benefit. By keeping functions private and revealing them, an obfuscator can minimize all of the internal function names, making it harder for prying eyes. Granted, it is not foolproof, just an extra layer of protection. Additionally, being able to work inside the function privately removes hundreds of occurrences of "this" within the class. Maybe it's a small advantage, but for complex code it improves readability. Does anybody see any major problems with the pattern?
//standard pattern
var MyClass = function() {
    this.func1 = function() {
        //dostuff
    };
};
MyClass.prototype.func2 = function() {
    this.func1();
    //dostuff
};
-
//revealing class pattern
var MyClass = function() {
    function privateFunc() {
        //dostuff
    }

    function publicFunc() {
        privateFunc();
        //dostuff
    }

    this.publicFunc = publicFunc;
};
I've seen the revealing module pattern, as well as the prototype class pattern. I kind of adapted the two together into a revealing class pattern.
You should have a look at the revealing prototype pattern, which truly combines them to reveal a class. What you currently have would be better called a "revealing instance pattern".
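For reference, here is a minimal sketch of what a revealing prototype version of the question's MyClass might look like (illustrative only; the method names mirror the question):

//revealing prototype pattern (sketch)
var MyClass = function() {};

MyClass.prototype = (function() {
    function privateFunc() {
        //dostuff (shared by all instances, hidden from callers)
    }

    function publicFunc() {
        privateFunc();
        //dostuff
    }

    // only the members listed here are revealed on the prototype
    return {
        constructor: MyClass,
        publicFunc: publicFunc
    };
}());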
Anybody see any major problems with the pattern?
You've stated the major caveat already, but if you consider the trade-off worthwhile then go for it. Given that most classes are not instantiated in very large volumes, this rarely makes a noticeable difference any more.
However, you could improve your pattern. Given that you don't use the prototype any more, there's no reason to keep it. Drop the new operator, drop using the this keyword in the constructor, and return an object literal. Voilà, you've got a factory function!
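Roughly, that factory-function version might look like this (a sketch of the suggestion, with an illustrative name):

//factory function (sketch of the suggestion above)
function createMyObject() {
    function privateFunc() {
        //dostuff
    }

    function publicFunc() {
        privateFunc();
        //dostuff
    }

    // no `new`, no `this`: just return the public API
    return {
        publicFunc: publicFunc
    };
}

var obj = createMyObject();
obj.publicFunc();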
Related
I've been using this pattern in my JS code:
function Thing() {
    var otherData = {
        // Private variables?
        name: "something"
    };

    var myThing = {
        data: "somedata",
        someFunction: function () {
            console.log(otherData.name);
        }
    };

    return myThing;
}
Then when using it, I do:
var thing = Thing();
thing.someFunction();
I've seen examples of constructors and singletons in JS, but I haven't run across this pattern before. Is there a name for this pattern? Are there any potential problems with this pattern? Previously I was just using the object literal pattern, but wanted to get private-ish variables by putting it in a closure.
Is there a name for this pattern?
That has various names. The common ones I've heard for it are:
factory function
maker function (this is the term Douglas Crockford, who popularized them, uses)
builder function
To avoid confusion we don't call them "constructor functions" or "constructors" as that term is specifically for functions used with new, which yours isn't.
(Note: "builder function" in this context is not related to the builder pattern [e.g., GoF patterns]. That's a completely different thing. Similarly, "factory function" here isn't really related to the factory pattern, but in this case there's overlap, as the factory pattern uses factory functions. "Maker" has the advantage of not having that potential confusion; I'm sure there's some "maker pattern" somewhere, but at least not in the initial GoF book.)
Are there any potential problems with this pattern?
Well, there are potential problems with all patterns. :-) There's nothing particularly problematic with this one, though, no.
Just to point out one problem it doesn't have: Someone might mention that you aren't leveraging prototypes with that pattern, but perhaps you just don't need one with that particular builder, and if you did, you could easily use one:
var thingProto = {
    method: function() {
        // I'm a shared method
    }
};

function buildThing() {
    var otherData = {
        // Private variables?
        name: "something"
    };

    var myThing = Object.create(thingProto);
    myThing.data = "somedata";
    myThing.someFunction = function () {
        console.log(otherData.name);
    };

    return myThing;
}
Doesn't change the pattern.
As a style note, normally you wouldn't capitalize it, as the overwhelming convention in JavaScript is that a capitalized function is a constructor function. So Thing might be called createThing or buildThing or just thing.
The constructor invocation pattern is when you prefix a function call with the new operator. This causes the function to act as a constructor, assigning its prototype property (ConstructorFunction.prototype) as the prototype (instanceObj.__proto__) of the newly created object, and letting the function modify that object's properties through the this keyword. Therefore, the way you're using that function is not a "constructor pattern".
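To make the distinction concrete, here is a small sketch (the names are made up for illustration):

function Dog(name) {              // intended to be used as a constructor
    this.name = name;
}
Dog.prototype.bark = function() {
    return this.name + " says woof";
};

var rex = new Dog("Rex");                                   // constructor invocation
console.log(Object.getPrototypeOf(rex) === Dog.prototype);  // true
console.log(rex.bark());                                    // "Rex says woof"

var oops = Dog("Rex");  // plain call: no new object is created and nothing useful is returned
console.log(oops);      // undefined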
The encapsulation of data using function closures is commonly referred to as the Module Pattern.
Here is the relevant section from Addy Osmani's book: Learning JavaScript Design Patterns.
The Module pattern was originally defined as a way to provide both private and public encapsulation for classes in conventional software engineering.

In JavaScript, the Module pattern is used to further emulate the concept of classes in such a way that we're able to include both public/private methods and variables inside a single object, thus shielding particular parts from the global scope. What this results in is a reduction in the likelihood of our function names conflicting with other functions defined in additional scripts on the page.

Privacy

The Module pattern encapsulates "privacy", state and organization using closures. It provides a way of wrapping a mix of public and private methods and variables, protecting pieces from leaking into the global scope and accidentally colliding with another developer's interface. With this pattern, only a public API is returned, keeping everything else within the closure private.

This gives us a clean solution for shielding logic doing the heavy lifting whilst only exposing an interface we wish other parts of our application to use. The pattern is quite similar to an immediately-invoked function expression (IIFE - see the section on namespacing patterns for more on this) except that an object is returned rather than a function.

It should be noted that there isn't really an explicitly true sense of "privacy" inside JavaScript because unlike some traditional languages, it doesn't have access modifiers. Variables can't technically be declared as being public nor private and so we use function scope to simulate this concept. Within the Module pattern, variables or methods declared are only available inside the module itself thanks to closure. Variables or methods defined within the returning object, however, are available to everyone.
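To ground the quote in code, here is a minimal sketch of the Module pattern (the names are illustrative):

var counterModule = (function () {
    var count = 0;              // private: lives only in the closure

    function increment() {      // private helper
        count += 1;
    }

    return {                    // only the public API escapes the closure
        next: function () {
            increment();
            return count;
        }
    };
}());

counterModule.next();       // 1
counterModule.count;        // undefined - the private state is unreachable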
For the longest time I have been writing JavaScript classes by doing the following:
function MyClass() {
    this.v1;
    this.foo = function() {
        return true;
    };
};
Then I found TypeScript, and it looks like it compiles its classes down to:
var MyClass = (function () {
    function MyClass(message) {
        this.v1 = message;
    }

    MyClass.prototype.foo = function () {
        return "Hello, " + this.v1;
    };

    return MyClass;
})();
Another method I have found a lot on the web is:
var MyClass = {
    v1: 1,
    foo: function() {
        return true;
    }
};
This looks ugly to me, but maybe I'm missing something beneficial about this method, as it looks to be the way most people on the web do objects in JavaScript.
With the first method I'm able to do inheritance via a function I made:
Function.prototype.extends = function(parent) {
    this.prototype = new parent();
    this.prototype.constructor = this;
};
I have also seen many other methods of making classes in JavaScript. I would like to know if my method is wrong or if there is a best-practice method for doing OOP. All of the projects I have done use the first example; you can see them at https://github.com/Patrick-W-McMahon. Now that I see what JavaScript compilers are doing, I'm starting to question my methods. I would like to know what other JavaScript programmers advise is the best method and whether there is a difference between the methods. I came from a C++/Java background, and as such I write my JavaScript to match my background.
What you read here is called the power constructor, more about it here:
Douglas Crockford about Inheritance
As a multi-paradigm language JavaScript lends itself very well to these different considerations. The truth is, it molds around your brain the way you can conceive it.
In other words, if you can reason about your structure in one way, then use it that way.
If you, on the other hand, want to be completely "free" of your own constraints, then you should abandon reasoning in classes altogether and embrace prototypal inheritance without any thing that resembles a class at all. Duck typing is then a natural consequence and in the extreme you'd use closures all over the place.
What I often do is this:
var myItemBuilder = (function () {
    var s0, p0 = 0;
    // s0 is a shared private property (much like static in classes)
    // p0 is a shared private property but will be factorized (rendered non-shared)
    return function (p1) {
        // p1 is a private instance property
        return (function (p0) { // factorize p0 if necessary
            return {
                publicProperty : 3,
                method1 : function (arg1) {
                    // code here (may use publicProperty, arg1, s0, p0 (factorized) and p1)
                },
                method2 : function (arg2) {
                    // code here (may use publicProperty, arg2, s0, p0 (factorized) and p1)
                }
            };
        }(p0++)); // on each invocation p0 will be different and method1/method2 will not interfere across invocations (while s0 is shared across invocations)
    };
}());
EDIT: The internal closure is only required if you need to factorize p0 (i.e. have separate, independent values for p0 on each invocation). Of course, if you don't need p0 factorized, omit the internal closure.
The above example is intentionally more complex than required to illustrate various interesting cases.
Invoking this method like myItemBuilder("hello") builds a new item that has those specific features but there really is no class construct per se.
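For comparison, if you didn't need the factorized p0 at all, the same idea might collapse to something like this (my simplified sketch, keeping only the shared s0 and the per-instance p1):

var myItemBuilder = (function () {
    var s0; // shared private property (much like static in classes)

    return function (p1) {
        // p1 is a private instance property
        return {
            publicProperty : 3,
            method1 : function (arg1) {
                // may use this.publicProperty, arg1, s0 and p1
            }
        };
    };
}());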
This is an especially powerful way of getting instances when you want to abandon classical inheritance. For example in C++ you can inherit from more than one class, which is called a mix-in. In Java and C# you only have single-inheritance but interfaces come to your help.
Here, what I've shown above, is the assembly line metaphor which can assemble components into an instance, with the result that there is no concrete (1) difference between class, interface and mix-in. They are all just instances that have features you can reflect upon through meta-programming (duck typing, reflection/inspection). There still is a logical (2) difference in that: (1) concrete instances behave the same independently of the way they come to being but (2) logically an interface is a contract while an instance is an implementation.
If you really understand SOLID and the Assembly Line Metaphor all of this makes sense. Otherwise I beg your pardon for this long answer :)
ADDED:
With this approach you can't check types, but you don't need to because duck typing allows you to find the method you need without having to look at a class or interface contract. This is similar to having each method in a separate single-method interface.
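As a small illustration of that duck-typed lookup (render is a hypothetical consumer, not part of the example above):

function render(item) {
    // ask for the capability, not the type
    if (typeof item.method1 === "function") {
        item.method1("render me");
    }
}

render(myItemBuilder("hello")); // works for anything that exposes method1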
This is explained here
http://www.2ality.com/2011/06/coffeescript-classes.html
A normal JavaScript class definition, wrapped in an IIFE [3]. There is no benefit for using an IIFE here, apart from having a single assignment (which would matter in an object literal, but doesn’t here).
It's a matter of personal preference. Using this method
var MyClass = {
    v1: 1,
    foo: function() {
        return true;
    }
};
results in fewer characters and thus smaller JS files, so it is the method I usually go with, but there is no right or wrong way. By the way, since you commented that this sort of object construction is ugly, you may want to stick with what you are used to. However, if you are feeling adventurous, check out this project.
It makes OOP in JavaScript quick and painless, and MUCH less ugly. Plus, it supports multiple inheritance correctly.
I'm having a problem with private variables in JavaScript's revealing prototype pattern. I can't figure out how I can have private variables that are used in several different functions inside the shared (singleton) prototype, without exposing them. Here is an example of what I mean in JSFiddle.
The problem is in using var v versus this.v. The first one messes up the state of all instances, but the second is publicly visible. Is there a way to keep v private and preserve its state for each individual instance?
There is not a way to do that with the revealing prototype pattern.
You can only do that with something like this:
function MyClass() {
    var v = 1;

    this.getV = function() {
        return v;
    };
}
And that's why there are some die-hard enthusiasts for this kind of approach.
Personal opinion: stick an underscore in front of it and put it on the object: this._v. Don't fight JavaScript; use it.
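In code, that convention might look like this (a sketch; the underscore only signals intent, it doesn't enforce anything):

function MyClass() {
    this._v = 1;                 // "private" by convention only
}
MyClass.prototype.getV = function() {
    return this._v;              // prototype methods can reach per-instance state
};

var a = new MyClass(), b = new MyClass();
a._v = 5;
console.log(a.getV()); // 5
console.log(b.getV()); // 1 - each instance keeps its own state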
I have been re-factoring someone else's JavaScript code.
BEFORE:
function SomeObj(flag) {
    var _private = true;
    this.flag = (flag) ? true : false;
    this.version = "1.1 (prototype)";
    if (!this._someProperty) this._init();
    // the leading underscore hints at what should be 'private', to me
    this.reset(); // assumes reset has been added...
}

SomeObj.prototype.reset = function() {
    /* perform some actions */
};

/* UPDATE */
SomeObj.prototype.getPrivate = function() {
    return _private; // throws a ReferenceError - _private is not in scope here
};
/* ...several other functions appended via `prototype`...*/
AFTER:
var SomeObj = function (flag) {
    var _private = true;
    this.flag = (flag) ? true : false;
    this.version = "2.0 (constructor)";

    this.reset = function () {
        /* perform some actions */
    };

    /* UPDATE */
    this.getPrivate = function() {
        return _private; // will return true
    };

    /* other functions and function calls here */
};
To me the first example looks difficult to read, especially in a larger context. Adding methods like reset this way, using the prototype property, seems much less controlled, as it can presumably happen anywhere in the script. My refactored code (the second example above) looks much neater to me and is therefore easier to read because it's self-contained. I've gained some privacy with the variable declarations, but I've lost the possibilities of the prototype chain.
...
QUESTIONS:
Firstly, I'm interested to know what else I have lost by foregoing prototype, or if there are larger implications to the loss of the prototype chain. This article is 6 years old but claims that using the prototype property is much more efficient on a large scale than closure patterns.
Both the examples above would still be instantiated by the new operator; they are both 'classical'-ish constructors. Eventually I'd even like to move away from this into a model where all the properties and functions are declared as vars, and I expose one method capable of returning an object opening up all the properties and methods I need, which have access (by virtue of closure) to those that are private. Something like this:
var SomeObj = (function () {
    /* all the stuff mentioned above, declared as 'private' `var`s */

    /* UPDATE */
    var getPrivate = function () {
        return _private;
    };

    var expose = function (flag) {
        // just returns `flag` for now
        // but could expose other properties
        return {
            flag: flag || false, // flag from argument, or default value
            getPrivate: getPrivate
        };
    };

    return {
        expose: expose
    };
})(); // IIFE
// instead of having to write `var whatever = new SomeObj(true);` use...
var whatever = SomeObj.expose();
There are a few answers on StackOverflow addressing the 'prototype vs. closure' question (here and here, for example). But, as with the prototype property, I'm interested in what a move towards this and away from the new operator means for the efficiency of my code and for any loss of possibility (e.g. instanceof is lost). If I'm not going to be using prototypal inheritance anyway, do I actually lose anything in foregoing the new operator?
A looser question if I'm permitted, given that I'm asking for specifics above: if prototype and new really are the most efficient way to go, with more advantages (whatever you think they might be) than closure, are there any guidelines or design patterns for writing them in a neater fashion?
...
UPDATE:
Note that expose returns a new object each time, so this is where the instantiation happens. As I understand this, where that object refers to methods declared in the SomeObj closure, they are the same methods across all objects (unless overwritten). In the case of the flag variable (which I've now corrected), this can be inherited from the argument of expose, have a default value, or again refer back to an encapsulated pre-existing method or property. So there are instances of objects being produced, and there is some inheritance (plus polymorphism?) going on here.
So to repeat question 2: If I'm not going to be using prototypal inheritance anyway, do I actually lose anything in foregoing the new operator?
Many thanks for answers so far, which have helped to clarify my question.
In my experience, the only thing you lose by not using .prototype is memory - each object ends up owning its own copy of the function objects defined therein.
If you only intend instantiating "small" numbers of objects this is not likely to be a big problem.
Regarding your specific questions:
The second comment on that linked article is highly relevant. The author's benchmark is wrong - it's testing the overhead of running a constructor that also declares four inner functions. It's not testing the subsequent performance of those functions.
Your "closure and expose" code sample is not OO, it's just a namespace with some enclosed private variables. Since it doesn't use new it's no use if you ever hope to instantiate objects from it.
I can't answer this - "it depends" is as good an answer as you can get for this.
Answers:
You already answered this question: you lose the prototype chain. (Actually you don't lose it, but your prototype will always be empty.) The consequences are:
There is a small performance/memory impact, because methods are created for each instance. But it depends a lot on the JavaScript engine, and you should worry about it only if you need to create a large number of objects.
You can't monkey-patch instances by modifying the prototype (sketched below). That's not a big issue either, since doing that tends to lead to a maintenance nightmare.
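To illustrate the second point, this is the kind of prototype-based patching you give up (a sketch with made-up names):

function Widget() {}
Widget.prototype.render = function () { return "original"; };

var w = new Widget();

// one change on the prototype affects every existing and future instance
Widget.prototype.render = function () { return "patched"; };
console.log(w.render()); // "patched"

// with per-instance methods created in the constructor there is no single
// place to patch, so each instance would have to be updated individually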
Let me make a small pedantic correction to your question: it's not a matter of "prototype vs. closure"; in fact, the concept of a closure is orthogonal to a prototype-based language.
The question is related on how you are going to create objects: define an new object from zero each time, or clone it from a prototype.
The example that you show about using functions to limit the scope, is a usual practice in JavaScript, and you can continue doing that even if you decide to use prototypes. For example:
var SomeObj = (function (flag) {
    /* all the stuff mentioned above, declared as 'private' `var`s */
    var MyObj = function() {};

    MyObj.prototype = {
        flag: flag,
        reset: reset
    };

    return {
        expose: function() { return new MyObj(); }
    };
})();
If you are worried about modularization, take a look at requirejs, which is an implementation of a technique called AMD (asynchronous module definition). Some people don't like AMD and some people love it. My experience with it was positive: it helped me a lot in creating a modular JavaScript app for the browser.
There are some libraries to make your life with prototypes easier: composejs, dejavu, and my own barman (yes, it is shameless self-promotion, but you can look at the source code to see ways of dealing with object definitions).
About patterns: Since you can easily hide object instantiation using factory methods, you can still use new to clone a prototype internally.
What else I have lost by foregoing prototype?
I'm sure someone can provide an answer, but I'll at least give it a shot. There are at least two reasons to use prototype:
prototype methods can be used statically
They are created only once
Creating a method as an object member means that it is created for every instance of the object. That's more memory per object, and it slows down object creation (hence your efficiency concern). People tend to say that prototype methods are like class methods whereas member methods are like object methods, but this is misleading, since methods in the prototype chain can still use the object instance (see the sketch below).
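A small sketch of the difference (illustrative names):

function ClosureCounter() {
    this.count = 0;
    this.inc = function () { this.count++; };   // a new function object per instance
}

function ProtoCounter() {
    this.count = 0;
}
ProtoCounter.prototype.inc = function () { this.count++; }; // one shared function

var a = new ClosureCounter(), b = new ClosureCounter();
console.log(a.inc === b.inc);  // false - two separate copies in memory

var c = new ProtoCounter(), d = new ProtoCounter();
console.log(c.inc === d.inc);  // true - shared, yet it still acts on each instance via `this`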
You can define the prototype as an object itself, so you may like the syntax better (but it's not all that different):
SomeObj.prototype = {
    method1: function () {},
    method2: function () {}
};
Your argument that it seems less controlled is valid to me. I get that it is weird to have two blocks involved in creating an object. However, it's a bit spurious in that there is nothing stopping someone from overwriting the prototype of your other object anyway.
//Your code
var SomeObj = function (flag) { //...
//Sneaky person's code
delete SomeObj.reset;
SomeObj.prototype.reset = function () { /* what now? */ }
Foregoing new
If you're only going to be creating specific object instances on the fly via {} notation, it's not really different from using new anyway. You would need to use new to create multiple instances of the same object from a class (function) definition. This is not unusual as it applies to any object oriented programming language, and it has to do with reuse.
For your current application, this may work great. However, if you came up with some awesome plugin that was reusable across contexts, it could get annoying to have to rewrite it a lot. I think that you are looking for something like require.js, which allows you to define "modules" that you can import with the require function. You define a module within a define function closure, so you get to keep the constructor and prototype definitions wrapped together anyway, and no one else can touch them until they've imported that module.
Advantages of closure vs. prototype
They are not mutually exclusive:
var attachTo = {};

;(function (attachTo, window, document, undefined) {
    var Plugin = function () { /* constructor */ };
    Plugin.prototype = { /* prototype methods */ };
    attachTo.plugin = Plugin;
})(attachTo, window, document);
var plugin = new (attachTo.plugin);
http://jsfiddle.net/ExplosionPIlls/HPjV7/1/
Question by question:
Basically, having the reset method in the prototype means that all instances of your constructor will share the exact same copy of the method. By creating a local method inside the constructor, you'll have one copy of the method per instance, which will consume more memory (this may become a problem if you have a lot of instances). Other than that, both versions are identical; changing function SomeObj to var SomeObj = function only differs in how SomeObj is hoisted in its parent scope. You said you "gained some privacy with the variable declarations", but I didn't see any private variables there...
With the IIFE approach you mentioned, you'll lose the ability to check if instance instanceof SomeObj.
Not sure if this answers your question, but there is also Object.create, where you can still set the prototype, but get rid of the new keyword. You lose the ability to have constructors, though.
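A sketch of that Object.create approach, using the question's example (names are illustrative):

var someObjProto = {
    reset: function () {
        /* perform some actions */
    }
};

function makeSomeObj(flag) {               // plain factory, no `new`
    var obj = Object.create(someObjProto); // the prototype chain is still there
    obj.flag = flag ? true : false;
    return obj;
}

var whatever = makeSomeObj(true);
whatever.reset();                                              // shared via the prototype
console.log(Object.getPrototypeOf(whatever) === someObjProto); // true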
I have been doing research to come up with a standardized Javascript coding style for my team. Most resources now recommend the "Module" pattern that involves closures, such as this:
var Module = function() {
    var someMethod = function() { /* ... */ };

    return {
        someMethod: someMethod
    };
}();
and invoke it like Module.someMethod(). This approach seems to work only with methods that would be static in a traditional OOP context, for example repository classes to fetch/save data, service layers to make outside requests, and the like. Unless I missed something, the module pattern isn't intended to be used with data classes (think DTOs) that would typically need to be passed between the service methods and the UI glue code.
A common benefit I see cited is that you can have true private methods and fields in Javascript with the module pattern, but this can also be achieved along with being able to have static or instance methods with the "classical" Javascript style similar to this:
var myClass = function(param) {
    // this is completely public
    this.publicProperty = 'Foo';

    // this is completely private
    var privateProp = param;

    // this function can access the private fields
    // AND can be called publicly; best of both?
    this.someMethod = function() {
        return privateProp;
    };

    // this function is private. FOR INTERNAL USE ONLY
    function privateMethod() {
        /* ... */
    }
};
// this method is static and doesn't require an instance
myClass.staticMethod = function() { /* ... */ };
// this method requires an instance and is the "public API"
myClass.prototype.instanceMethod = function() { /* ... */ };
So I guess my question is what makes the Module Pattern better than the traditional style? It's a bit cleaner, but that seems to be the only benefit that is immediately apparent; in fact, the traditional style seems to offer the ability to provide real encapsulation (similar to true OOP languages like Java or C#) instead of simply returning a collection of static-only methods.
Is there something I'm missing?
The module pattern can be used to create prototypes as well; see:
var Module = function() {
    function Module() {}

    Module.prototype.whatever = function() {};

    return Module;
}();

var m = new Module();
m.whatever();
As the other poster said, keeping the global namespace clean is the reason for it. Another way to achieve this, however, is to use the AMD pattern, which also solves other issues like dependency management. It also wraps everything in a closure of sorts. Here's a great introduction to AMD, which stands for Asynchronous Module Definition.
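For a flavour of what AMD looks like, here is a minimal RequireJS-style module (the dependency and the returned API are illustrative):

define(["jquery"], function ($) {
    // everything in here is private to the module's closure
    function helper() { /* ... */ }

    // only what is returned becomes the module's public API
    return {
        doSomething: function () {
            helper();
            return $("body").length;
        }
    };
});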
I also recommend reading JavaScript Patterns as it thoroughly covers the reasons for various module patterns.
The module pattern above is pointless. All you are doing is using one closure to return a constructor with prototype. You could have achieved the same with:
function Module() {};
Module.prototype.whatever = function() {};
var m = new Module();
m.whatever();
In fact you would have saved one object (the closure) from being created, with the same output.
My other beef with the module pattern is that if you are using it for private encapsulation, you can only use it easily with singletons and not concrete classes. To create concrete classes with private data you end up wrapping two closures, which gets ugly. I also agree that debugging the underscore pseudo-private properties is a lot easier when they are visible.

The whole notion of "what if someone uses your class incorrectly" is never justified. Make a clean public API, document it, and if people don't follow it correctly, then you have a poor programmer on your team. The amount of effort required in JavaScript to hide variables (which can be discovered with eval in Firefox) is not worth it for typical uses of JS, even for medium to large projects. People don't snoop around your objects to learn them; they read your docs. If your docs are good (for example, written with JSDoc) then they will stick to them, just like we do with every third-party library we need. We don't "tamper" with jQuery or YUI; we just trust and use the public API without caring too much about how or what it uses underneath.
Another benefit of the module pattern you forgot to mention is that it promotes less clutter in the global namespace, i.e., what people refer to when they say "don't pollute the global namespace". Sure, you can do that with your classical approach, but it seems creating objects outside of anonymous functions lends itself more easily to creating those objects in the global namespace.
In other words, the module pattern promotes self contained code. A module will typically push one object onto the global namespace, and all interaction with the module goes through this object. It's like a "main" method.
Less clutter in the global namespace is good, because it reduces the chances of collisions with other frameworks and javascript code.