For the longest time I have been writing JavaScript classes like this:
function MyClass(){
    this.v1;
    this.foo = function(){
        return true;
    };
}
Then I found TypeScript, and it looks like it compiles its classes down to:
var MyClass = (function () {
    function MyClass(message) {
        this.v1 = message;
    }
    MyClass.prototype.foo = function () {
        return "Hello, " + this.v1;
    };
    return MyClass;
})();
Another method I have found a lot on the web is:
var MyClass = {
    v1: 1,
    foo: function(){
        return true;
    }
};
This looks ugly to me, but maybe I'm missing something beneficial about this method, as it seems to be the way most people on the web build objects in JavaScript.
With the first method I'm able to do inheritance using a helper function I made:
Function.prototype.extends = function(parent){
    this.prototype = new parent();
    this.prototype.constructor = this;
};
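For illustration, usage looks like this (Animal and Dog are just placeholder names):
function Animal() {
    this.speak = function () {
        return "...";
    };
}

function Dog() {}
Dog.extends(Animal); // Dog.prototype is now an Animal instance

var d = new Dog();
console.log(d instanceof Animal); // true
console.log(d.speak());           // "..."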
I have also seen many other ways of making classes in JavaScript. I would like to know if my method is wrong, or if there is a best-practice way of doing OOP. All of my projects so far use the first example (you can see them at https://github.com/Patrick-W-McMahon), but now that I see what JavaScript compilers are doing I'm starting to question my approach. I would like to know what other JavaScript programmers consider the best method and whether there is a real difference between them. I come from a C++/Java background and, as such, I write my JavaScript to match that background.
What you have here is called the power constructor; more about it here:
Douglas Crockford about Inheritance
As a multi-paradigm language JavaScript lends itself very well to these different considerations. The truth is, it molds around your brain the way you can conceive it.
In other words, if you can reason about your structure in one way, then use it that way.
If you, on the other hand, want to be completely "free" of your own constraints, then you should abandon reasoning in classes altogether and embrace prototypal inheritance without anything that resembles a class at all. Duck typing is then a natural consequence, and in the extreme you'd use closures all over the place.
What I often do is this:
var myItemBuilder = (function () {
    var s0, p0 = 0;
    // s0 is a shared private property (much like static in classes)
    // p0 is a shared private property but will be factorized (rendered non-shared)
    return function (p1) {
        // p1 is a private instance property
        return (function (p0) { // factorize p0 if necessary
            return {
                publicProperty : 3,
                method1 : function (arg1) {
                    // code here (may use publicProperty, arg1, s0, p0 (factorized) and p1)
                },
                method2 : function (arg2) {
                    // code here (may use publicProperty, arg2, s0, p0 (factorized) and p1)
                }
            };
        }(p0++)); // on each invocation p0 will be different and method1/method2 will not interfere across invocations (while s0 is shared across invocations)
    };
}());
EDIT: The internal closure is only required if you need to factorize p0 (i.e. have separate, independent values for p0 on each invocation). Of course, if you don't need p0 factorized omit the internal closure.
The above example is intentionally more complex than required to illustrate various interesting cases.
Invoking this method like myItemBuilder("hello") builds a new item that has those specific features but there really is no class construct per se.
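For example (a usage sketch; itemA and itemB are placeholder names):
var itemA = myItemBuilder("hello");
var itemB = myItemBuilder("world");

itemA.method1("x"); // sees s0, its own factorized p0, and p1 === "hello"
itemB.method1("y"); // sees the same s0, a different factorized p0, and p1 === "world"

console.log(itemA.publicProperty); // 3
console.log(typeof itemA.method2); // "function"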
This is an especially powerful way of getting instances when you want to abandon classical inheritance. For example, in C++ you can inherit from more than one class (a mix-in of sorts); in Java and C# you only have single inheritance, but interfaces come to your aid.
Here, what I've shown above is the assembly-line metaphor, which can assemble components into an instance, with the result that there is no concrete (1) difference between class, interface and mix-in. They are all just instances that have features you can reflect upon through meta-programming (duck typing, reflection/inspection). There still is a logical (2) difference in that: (1) concrete instances behave the same independently of the way they come into being, but (2) logically an interface is a contract while an instance is an implementation.
If you really understand SOLID and the Assembly Line Metaphor all of this makes sense. Otherwise I beg your pardon for this long answer :)
ADDED:
With this approach you can't check types, but you don't need to because duck typing allows you to find the method you need without having to look at a class or interface contract. This is similar to having each method in a separate single-method interface.
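A duck-typing check is then as simple as asking for the feature you need (a sketch; greet is a hypothetical consumer of such items):
function greet(item) {
    // we don't care what "type" item is, only whether it has the feature we need
    if (typeof item.method1 === "function") {
        return item.method1("hi");
    }
    return undefined; // or handle the "does not quack" case some other way
}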
This is explained here
http://www.2ality.com/2011/06/coffeescript-classes.html
A normal JavaScript class definition, wrapped in an IIFE [3]. There is no benefit for using an IIFE here, apart from having a single assignment (which would matter in an object literal, but doesn’t here).
It's a matter of personal preference. Using this method:
var MyClass = {
    v1: 1,
    foo: function(){
        return true;
    }
};
results in fewer characters and thus smaller JS files, so it is the method I usually go with, but there is no right or wrong way. By the way, since you commented that this sort of object construction looks ugly, you may want to stick with what you are used to. However, if you are feeling adventurous, check out this project.
It makes OOP in JavaScript quick and painless and MUCH less ugly. Plus, it supports multiple inheritance correctly.
Related
I've recently found an interest in the factory function pattern in JavaScript. Classes may be pretty clean, but I've found they've got their fair share of issues. (I'm not going to even talk about them because that isn't at all the purpose of this question.)
There is one major aspect of factory functions I'm not clear on however.
Correct me if I'm wrong, but in a JavaScript class my methods are placed on the resulting object's prototype so they are only ever created once. This means the constructor function used internally by the class won't be adding the methods as properties to every single new object, meaning conceptually that the methods belong to the class and not the object instance.
class Example {
    constructor(message) {
        this.message = message;
    }

    sayMessage() {
        console.log(this.message);
    }
}

let a = new Example("Hello!");
a.sayMessage(); // Outputs "Hello!"

console.log(Object.getOwnPropertyNames(a));
// Outputs ["message"]

Example.prototype.sayMessage = function() {
    console.log(this.message + " Modified!");
};

let b = new Example("Hello!");
b.message = "Goodbye!";
b.sayMessage(); // Outputs "Goodbye! Modified!"
In the class above, sayMessage is the same function for every instance of the class. As demonstrated, it can even be changed on the prototype later, updating all existing instances of the class. I can't say for sure whether I think this is a good thing, but it certainly makes sense.
However, in a factory function it seems that the object returned just has all needed methods attached to it as normal properties.
function Example(message) {
    let sayMessage = function() {
        console.log(this.message);
    };

    return {
        message: message,
        sayMessage: sayMessage
    };
}

let a = Example("Hello!");
a.sayMessage(); // Outputs "Hello!"

console.log(Object.getOwnPropertyNames(a));
// Outputs ["message", "sayMessage"]

// Modifying the prototype is pointless because
// we haven't explicitly placed any methods there

let b = Example("Hello!");
b.message = "Goodbye!";
b.sayMessage(); // Outputs "Goodbye!"
I'd like to talk about this a bit. So first of all, is avoiding the prototype part of the whole point of using a factory function in the first place? Why would we not create a function once and instead revert to duplicating it across every instance?
I like the idea that I have more control over what I expose in the final object when I use a factory function, because I can keep things private in the closure, but can I still use a prototype-based inheritance sort of model? Or perhaps an even better question: would I want to? One of the main arguments I've found against factory functions is that they are slower than classes in performance-critical applications. Well, isn't this part of the reason? Creating a whole bunch of methods for every new object sounds like a huge waste. It sounds like the whole point of having a prototype!
A lot of questions are flying around here, so let me boil it all down. I'd like to hear what the rationale would be for not using a prototype (or, inversely, for using one), and whether there is a solution that includes the best of both worlds.
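To make the question concrete, here is one hybrid I have been wondering about: a sketch (not necessarily idiomatic; the names are made up) where the factory links each object to a shared methods object with Object.create:
const exampleMethods = {
    sayMessage() {
        console.log(this.message);
    }
};

function createExample(message) {
    // methods live once on exampleMethods; each object only gets a prototype link to it
    const self = Object.create(exampleMethods);
    self.message = message;
    return self;
}

let c = createExample("Hello!");
c.sayMessage(); // Outputs "Hello!"
console.log(Object.getOwnPropertyNames(c)); // ["message"] -- no duplicated methods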
Edit: When I first posted the question my second code example was creating Example instances with the new keyword, which was a typo.
I've seen the revealing module pattern, as well as the prototype class pattern. I kind of adapted the two together into a revealing class pattern. I'm trying to figure out if there are any gotchas with this pattern, or things that wouldn't work. As long as the constructor for the object is typeof "function" I would think there wouldn't be any problems.
The only caveat I can come up with is a class instantiated in high volume, where the private functions get created every time a new instance is created. This may cause memory problems for thousands of instantiations. For larger classes that don't build up in memory, though, my thinking is that the easier code readability and the security may be of benefit. By keeping functions private and revealing them, the obfuscator minimizes all of the internal functions, making it difficult for prying eyes. Granted, it is not foolproof, just an extra layer of protection. Additionally, being able to work inside the function privately removes hundreds of occurrences of "this" within the class. Maybe it's a small advantage, but for complex code it improves readability. Anybody see any major problems with the pattern?
//standard pattern
var MyClass = function() {
    this.func1 = function() {
        //dostuff
    };
};

MyClass.prototype.func2 = function() {
    this.func1();
    //dostuff
};
-
//revealing class pattern
var MyClass = function() {
    function privateFunc() {
        //dostuff
    }

    function publicFunc() {
        privateFunc();
        //dostuff
    }

    this.publicFunc = publicFunc;
};
I've seen the revealing module pattern, as well as the prototype class pattern. I kind of adapted the two together into a revealing class pattern.
You should have a look at the revealing prototype pattern which truly combines them to reveal a class. What you currently have should rather be called "revealing instance pattern".
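For reference, a minimal sketch of the revealing prototype pattern (the names are placeholders):
var MyClass = function () {
    // per-instance state only
};

MyClass.prototype = (function () {
    // private helpers live in this closure and are created once,
    // but note: they cannot see instance state unless you pass it in
    function privateFunc(self) {
        //dostuff
    }

    function publicFunc() {
        privateFunc(this);
        //dostuff
    }

    return {
        constructor: MyClass,
        publicFunc: publicFunc
    };
}());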
Anybody see any major problems with the pattern?
You've stated the major caveat already, but if you consider it worth it, then go for it. Given that most classes are not instantiated in very large volumes, this doesn't really make much of a difference any more.
However, you could improve your pattern. Given that you don't use the prototype any more, there's no reason to keep it. Drop the new operator, drop using the this keyword in the constructor, and return an object literal. Voilà, you've got a factory function!
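In other words, something like this (a rough sketch of the factory version of your example):
function createMyClass() {
    function privateFunc() {
        //dostuff
    }

    function publicFunc() {
        privateFunc();
        //dostuff
    }

    // no `new`, no `this`: just return the public surface
    return {
        publicFunc: publicFunc
    };
}

var obj = createMyClass();
obj.publicFunc();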
I have been refactoring someone else's JavaScript code.
BEFORE:
function SomeObj(flag) {
    var _private = true;
    this.flag = (flag) ? true : false;
    this.version = "1.1 (prototype)";
    if (!this._someProperty) this._init();
    // leading underscore hints at what should be a 'private' to me
    this.reset(); // assumes reset has been added...
}

SomeObj.prototype.reset = function() {
    /* perform some actions */
};

/* UPDATE */
SomeObj.prototype.getPrivate = function() {
    return _private; // ReferenceError: the constructor's local _private is not in scope here
};

/* ...several other functions appended via `prototype`...*/
AFTER:
var SomeObj = function (flag) {
    var _private = true;
    this.flag = (flag) ? true : false;
    this.version = "2.0 (constructor)";

    this.reset = function () {
        /* perform some actions */
    };

    /* UPDATE */
    this.getPrivate = function () {
        return _private; // will return true
    };

    /* other functions and function calls here */
};
To me the first example looks difficult to read, especially in a larger context. Adding methods like reset this way, via the prototype property, seems much less controlled, as it can presumably happen anywhere in the script. My refactored code (the second example above) looks much neater to me and is therefore easier to read because it's self-contained. I've gained some privacy with the variable declarations, but I've lost the possibilities of the prototype chain.
...
QUESTIONS:
Firstly, I'm interested to know what else I have lost by foregoing prototype, or if there are larger implications to the loss of the prototype chain. This article is 6 years old but claims that using the prototype property is much more efficient on a large scale than closure patterns.
Both the examples above would still be instantiated with the new operator; they are both 'classical'-ish constructors. Eventually I'd even like to move away from this into a model where all the properties and functions are declared as vars and I expose one method that's capable of returning an object opening up all the properties and methods I need, and which has privileged access (by virtue of closure) to those that stay private. Something like this:
var SomeObj = (function () {
    /* all the stuff mentioned above, declared as 'private' `var`s */

    /* UPDATE */
    var getPrivate = function () {
        return _private;
    };

    var expose = function (flag) {
        // just returns `flag` for now
        // but could expose other properties
        return {
            flag: flag || false, // flag from argument, or default value
            getPrivate: getPrivate
        };
    };

    return {
        expose: expose
    };
})(); // IIFE

// instead of having to write `var whatever = new SomeObj(true);` use...
var whatever = SomeObj.expose();
There are a few answers on StackOverflow addressing the 'prototype vs. closure' question (here and here, for example). But, as with the prototype property, I'm interested in what a move towards this and away from the new operator means for the efficiency of my code and for any loss of possibility (e.g. instanceof is lost). If I'm not going to be using prototypal inheritance anyway, do I actually lose anything in foregoing the new operator?
A looser question if I'm permitted, given that I'm asking for specifics above: if prototype and new really are the most efficient way to go, with more advantages (whatever you think they might be) than closure, are there any guidelines or design patterns for writing them in a neater fashion?
...
UPDATE:
Note that expose returns a new object each time, so this is where the instantiation happens. As I understand it, where that object refers to methods declared in the SomeObj closure, they are the same methods across all objects (unless overwritten). In the case of the flag variable (which I've now corrected), this can be inherited from the argument of expose, have a default value, or again refer back to an encapsulated pre-existing method or property. So there are instances of objects being produced and there is some inheritance (plus polymorphism?) going on here.
So to repeat question 2: If I'm not going to be using prototypal inheritance anyway, do I actually lose anything in foregoing the new operator?
Many thanks for answers so far, which have helped to clarify my question.
In my experience, the only thing you lose by not using .prototype is memory - each object ends up owning its own copy of the function objects defined therein.
If you only intend instantiating "small" numbers of objects this is not likely to be a big problem.
Regarding your specific questions:
The second comment on that linked article is highly relevant. The author's benchmark is wrong - it's testing the overhead of running a constructor that also declares four inner functions. It's not testing the subsequent performance of those functions.
Your "closure and expose" code sample is not OO, it's just a namespace with some enclosed private variables. Since it doesn't use new it's no use if you ever hope to instantiate objects from it.
I can't answer this - "it depends" is as good an answer as you can get for this.
Answers:
You already answered this question yourself: you lose the prototype chain. (Actually you don't lose it, but your prototype will always be empty.) The consequences are:
There is a small performance/memory impact, because methods are created for each instance. But it depends a lot on the JavaScript engine, and you should worry about it only if you need to create a large number of objects.
You can't monkey patch instances by modifying the prototype. Not a big issue either, since doing that leads to a maintenance nightmare.
Let me make a small pedantic correction to your question: it's not a matter of "prototype vs. closure"; in fact, the concept of closure is orthogonal to a prototype-based language.
The question is really about how you are going to create objects: define a new object from scratch each time, or clone it from a prototype.
The example that you show, using functions to limit scope, is common practice in JavaScript, and you can keep doing that even if you decide to use prototypes. For example:
var SomeObj = (function (flag) {
    /* all the stuff mentioned above, declared as 'private' `var`s */
    var MyObj = function() {};

    MyObj.prototype = {
        flag: flag,
        reset: reset
    };

    return {
        expose: function() { return new MyObj(); }
    };
})();
If you are worried about modularization, take a look into requirejs, which is an implementation of a technique called AMD (Asynchronous Module Definition). Some people don't like AMD and some people love it. My experience with it was positive: it helped me a lot in creating a modular JavaScript app for the browser.
There are some libraries that make your life with prototypes easier: composejs, dejavu, and my own barman (yes, it is shameless self-promotion, but you can look into the source code to see ways of dealing with object definitions).
About patterns: Since you can easily hide object instantiation using factory methods, you can still use new to clone a prototype internally.
What else I have lost by foregoing prototype?
I'm sure someone can provide an answer, but I'll at least give it a shot. There are at least two reasons to use prototype:
prototype methods can be used statically
They are created only once
Creating a method as an object member means that it is created for every instance of the object. That's more memory per object, and it slows down object creation (hence your efficiency concern). People tend to say that prototype methods are like class methods whereas member methods are like object methods, but this is very misleading, since methods in the prototype chain can still use the object instance.
You can define the prototype as an object itself, so you may like the syntax better (but it's not all that different):
SomeObj.prototype = {
    method1: function () {},
    method2: function () {}
};
Your argument that it seems less controlled is valid to me. I get that it is weird to have two blocks involved in creating an object. However, it's a bit spurious in that there is nothing stopping someone from overwriting the prototype of your other object anyway.
//Your code
var SomeObj = function (flag) { //...
//Sneaky person's code
delete SomeObj.reset;
SomeObj.prototype.reset = function () { /* what now? */ }
Foregoing new
If you're only going to be creating specific object instances on the fly via {} notation, it's not really different from using new anyway. You would need to use new to create multiple instances of the same object from a class (function) definition. This is not unusual as it applies to any object oriented programming language, and it has to do with reuse.
For your current application, this may work great. However, if you came up with some awesome plugin that was reusable across contexts, it could get annoying to have to rewrite it a lot. I think that you are looking for something like require.js, which allows you to define "modules" that you can import with the require function. You define a module within a define function closure, so you get to keep the constructor and prototype definitions wrapped together anyway, and no one else can touch them until they've imported that module.
Advantages of closure vs. prototype
They are not mutually exclusive:
var attachTo = {};

;(function (attachTo, window, document, undefined) {
    var Plugin = function () { /* constructor */ };
    Plugin.prototype = { /* prototype methods */ };

    attachTo.plugin = Plugin;
})(attachTo, window, document);
var plugin = new (attachTo.plugin);
http://jsfiddle.net/ExplosionPIlls/HPjV7/1/
Question by question:
Basically, having the reset method in the prototype means that all instances of your constructor will share the exact same copy of the method. By creating a local method inside the constructor, you'll have one copy of the method per instance, which will consume more memory (this may become a problem if you have a lot of instances). Other than that, both versions are identical; changing function SomeObj to var SomeObj = function only differs in how SomeObj is hoisted in its parent scope. You said you "gained some privacy with the variable declarations", but I didn't see any private variables there...
With the IIFE approach you mentioned, you'll lose the ability to check if instance instanceof SomeObj.
Not sure if this answers your question, but there is also Object.create, where you can still set the prototype, but get rid of the new keyword. You lose the ability to have constructors, though.
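A quick sketch of that approach (the names are made up):
var someProto = {
    reset: function () {
        /* perform some actions */
    }
};

// the prototype link is set, but no constructor function runs
var obj = Object.create(someProto);
obj.flag = true; // initialization has to happen "by hand" (or via an init method)

console.log(someProto.isPrototypeOf(obj)); // true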
I have been looking deeply into JavaScript lately to fully understand the language, and I have a few nagging questions that I cannot seem to find answers to (specifically dealing with object-oriented programming).
Assuming the following code:
function TestObject()
{
    this.fA = function()
    {
        // do stuff
    };

    this.fB = testB;

    function testB()
    {
        // do stuff
    }
}

TestObject.prototype = {
    fC : function ()
    {
        // do stuff
    }
};
What is the difference between functions fA and fB? Do they behave exactly the same in scope and potential ability? Is it just convention or is one way technically better or proper?
If there is only ever going to be one instance of an object at any given time, would adding a function to the prototype such as fC even be worthwhile? Is there any benefit to doing so? Is the prototype only really useful when dealing with many instances of an object or inheritance?
And what is technically the "proper" way to add methods to the prototype: the way I have above, or calling TestObject.prototype.functionName = function(){} every time?
I am looking to keep my JavaScript code as clean and readable as possible, but I am also very interested in what the proper conventions for objects are in the language. I come from a Java and PHP background and am trying not to make any assumptions about how JavaScript works, since I know it is very different, being prototype-based.
What is the difference between functions fA and fB
In practice, nothing. The primary difference between a function expression (fA) and a function declaration (fB) is when the function is created (declared functions are available before any code is executed, whereas a function expression isn't available until the expression is actually executed). There are also various quirks associated with function expressions that you may stumble across.
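A quick sketch to illustrate that timing difference:
declared(); // works: function declarations are hoisted together with their body
function declared() {}

try {
    expressed(); // TypeError: only `var expressed` is hoisted, its value is still undefined
} catch (e) {
    console.log(e instanceof TypeError); // true
}
var expressed = function () {};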
In the example, I'd use a function expression, simply because declaring a function and then assigning the result seems a bit abstracted. But there is nothing "right" or "wrong" about either approach.
If there is only ever going to be one instance of an object at any given time, would adding a function to the prototype such as fC even be worthwhile?
No. Just about everyone who goes down the inheritance path finds that plain objects are often simpler and therefore "better". Prototype inheritance is very handy for patching built-in objects, though (e.g. adding Array.prototype.each where absent).
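For example, a guarded patch might look something like this (a sketch; this each is only an illustration, not a standard method):
if (!Array.prototype.each) {
    Array.prototype.each = function (callback) {
        for (var i = 0; i < this.length; i++) {
            callback(this[i], i, this);
        }
        return this;
    };
}

[1, 2, 3].each(function (n) { console.log(n); }); // 1, 2, 3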
And what is technically the "proper" way to add methods to the prototype…
There isn't one. Replacing the default prototype with some other object seems like a bit of a waste, but assigning an object created by a literal is perhaps tidier and easier to read than sequential assignments. For one or two assignments, I'd use:
Constructor.prototype.method = function(){…}
but for lots of methods I'd use an object literal. Some even use a classic extend function and do:
myLib.extend(Constructor.prototype, {
    method: function(){…}
});
Which is good for adding methods if some have already been defined.
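(myLib.extend is not defined above; a minimal sketch of such a helper, just for illustration, might be:)
var myLib = myLib || {};
myLib.extend = function (target, source) {
    for (var key in source) {
        if (Object.prototype.hasOwnProperty.call(source, key)) {
            target[key] = source[key]; // copy each own property onto the target
        }
    }
    return target;
};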
Have a look at some libraries and decide what you like, some mix and match. Do whatever suits a particular circumstance, often it's simply a matter of getting enough code to all look the same, then it will look neat whatever pattern you've chosen.
fA and fB are effectively the same and it is just a matter of convention.
If there is only one instance of an object, I wouldn't even use a constructor function, but rather just an object literal, such as:
var o = {
    fA: function () { ... },
    fB: function () { ... },
    fC: function () { ... }
};
As for adding it to an instance or a prototype, the instance is slightly more efficient than adding it to the prototype if you only have one instance but, as I said, use a literal instead.
I avoid declaring functions in the constructor, because each invocation of the constructor will create new objects representing each function. These objects are not very large, but they tend to add up if many objects are created. If the functions can be moved to the prototype, it is much more efficient to do so.
As for adding to the prototype, I favor the
TestObject.prototype.functionName = function () { };
style, but it is a matter of preference. I like the above because it looks the same whether you are extending the prototype or creating the initial prototype.
Also are there any definitive JavaScript style guides or documentation
about how JavaScript operates at a low level?
Damn, no JavaScript programmer should ever miss "Professional JavaScript for Web Developers". This is a fantastic book that goes deep. It explains objects, class emulation, functions, scopes and much, much more. It is also a JS reference.
And what is technically the "proper" way to add methods to the
prototype the way I have above or calling
TestObject.prototype.functionName = function(){} every time?
As for the way to define classes, I would recommend having a look at various JS MVC frameworks (like Spine.js, which is lightweight). You do not need the whole framework, just their class emulation libraries. The main reason for this is that JS does not have the concept of classes; rather, it consists purely of objects and prototypes. On the other hand, classes can be perfectly well emulated (please do not take the word "emulated" to mean that something is missing). As this needs some discipline from the programmer, it is better to have a class emulation library do the job and make your code cleaner.
The standard methods that a programmer should expect of a class emulation library are:
// define a new Class
var AClass = Class.create({
    // Object members (aka prototype),
    // usually an initialize() method is called if it is defined
    // as the "constructor" function when a class object is created
}, {
    // Static members
});

// create a child class that inherits methods and attributes from the parent
var ChildClass = AClass.extend({
    // Child object members
}, {
    // Child static members
});

AClass.include({
    // Object amendments (aka mixin), methods and attributes
    // to be injected into the class object
}, {
    // Static amendments, methods and attributes to be
    // injected as class static members
});

// check object type
var aObj = new AClass();
aObj instanceof AClass;     // true
aObj instanceof ChildClass; // false

var cObj = new ChildClass();
cObj instanceof AClass;     // true
cObj instanceof ChildClass; // true
I'll answer the first part: there is no difference. When you declare a function as a declaration rather than as a variable, the declaration is hoisted to the top of its scope, so
...
func();
...
function func () { ... }
is equal to
var func = function () { ... };
...
func();
...
So your code
function TestObject () {
    this.fA = function() { /* do stuff */ };
    this.fB = testB;

    function testB() { /* do stuff */ }
}
is equal to
function TestObject () {
    var testB = function () { /* do stuff */ };

    this.fA = function () { /* do stuff */ };
    this.fB = testB;
}
I have been doing research to come up with a standardized JavaScript coding style for my team. Most resources now recommend the "Module" pattern that involves closures, such as this:
var Module = function() {
    var someMethod = function() { /* ... */ };

    return {
        someMethod: someMethod
    };
}();
and invoke it like Module.someMethod();. This approach seems to only work with methods that would be static in a traditional OOP context, for example repository classes that fetch/save data, service layers that make outside requests, and the like. Unless I missed something, the module pattern isn't intended to be used with data classes (think DTOs) that would typically need to be passed between the service methods and the UI glue code.
A common benefit I see cited is that you can have true private methods and fields in JavaScript with the module pattern, but this can also be achieved, along with being able to have static or instance methods, with the "classical" JavaScript style similar to this:
var myClass = function(param) {
    // this is completely public
    this.publicProperty = 'Foo';

    // this is completely private
    var privateProp = param;

    // this function can access the private fields
    // AND can be called publicly; best of both?
    this.someMethod = function() {
        return privateProp;
    };

    // this function is private. FOR INTERNAL USE ONLY
    function privateMethod() {
        /* ... */
    }
};

// this method is static and doesn't require an instance
myClass.staticMethod = function() { /* ... */ };

// this method requires an instance and is the "public API"
myClass.prototype.instanceMethod = function() { /* ... */ };
So I guess my question is what makes the Module Pattern better than the traditional style? It's a bit cleaner, but that seems to be the only benefit that is immediately apparent; in fact, the traditional style seems to offer the ability to provide real encapsulation (similar to true OOP languages like Java or C#) instead of simply returning a collection of static-only methods.
Is there something I'm missing?
The module pattern can be used to create prototypes as well; see:
var Module = function() {
    function Module() {}
    Module.prototype.whatever = function() {};

    return Module;
}();

var m = new Module();
m.whatever();
As the other poster said, the clean global namespace is the reason for it. Another way to achieve this, however, is to use the AMD pattern, which also solves other issues like dependency management. It also wraps everything in a closure of sorts. Here's a great introduction to AMD, which stands for Asynchronous Module Definition.
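For illustration, a minimal AMD module might look like this (the "jquery" dependency and the exposed API are made-up assumptions):
define(["jquery"], function ($) {
    // everything in this closure is private to the module
    function helper() { /* ... */ }

    // whatever is returned becomes the module's public API
    return {
        doSomething: function () {
            helper();
        }
    };
});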
I also recommend reading JavaScript Patterns as it thoroughly covers the reasons for various module patterns.
The module pattern above is pointless. All you are doing is using one closure to return a constructor with prototype. You could have achieved the same with:
function Module() {};
Module.prototype.whatever = function() {};
var m = new Module();
m.whatever();
In fact you would have saved one object (the closure) from being created, with the same output.
My other beef with the module pattern is that if you are using it for private encapsulation, you can only use it easily with singletons and not with concrete classes. To create concrete classes with private data you end up nesting two closures, which gets ugly. I also agree that being able to debug underscore pseudo-private properties is a lot easier when they are visible. The whole notion of "what if someone uses your class incorrectly" is never justified. Make a clean public API, document it, and if people don't follow it correctly, then you have a poor programmer on your team. The amount of effort required in JavaScript to hide variables (which can be discovered with eval in Firefox) is not worth it for typical JS use, even on medium to large projects. People don't snoop around your objects to learn them; they read your docs. If your docs are good (for example, using JSDoc) then they will stick to them, just like we do with every third-party library we need. We don't "tamper" with jQuery or YUI; we just trust and use the public API without caring too much about how or what it uses underneath.
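For reference, the nested-closure shape I mean looks roughly like this (a sketch with made-up names):
var createCounter = (function () {
    // outer closure: "static" private state shared by all instances
    var instances = 0;

    return function () {
        // inner closure: private state per instance
        var count = 0;
        instances++;

        return {
            increment: function () { return ++count; },
            created: function () { return instances; }
        };
    };
}());

var c = createCounter();
c.increment(); // 1
c.created();   // 1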
Another benefit of the module pattern that you forgot to mention is that it promotes less clutter in the global namespace, i.e., what people refer to when they say "don't pollute the global namespace". Sure, you can achieve that with the classical approach too, but it seems that creating objects outside of anonymous functions lends itself more easily to creating those objects in the global namespace.
In other words, the module pattern promotes self-contained code. A module will typically push one object onto the global namespace, and all interaction with the module goes through this object. It's like a "main" method.
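For instance (a sketch; MYAPP is a made-up namespace):
var MYAPP = (function () {
    // nothing in here leaks into the global namespace...
    var privateState = {};

    function init() {
        /* wire everything up */
    }

    // ...except the single object we return
    return {
        init: init
    };
}());

MYAPP.init();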
Less clutter in the global namespace is good, because it reduces the chances of collisions with other frameworks and JavaScript code.