Could someone explain the fundamental difference between:
define(['backbone'], function(Backbone) {
    MyModel = Backbone.Model.extend({
    });
});

define(['backbone', 'models/mymodel'], function(Backbone){
    var app = Backbone.View.extend({
        initialize: function() {
            var model = new MyModel();
        }
    });
});
and:
define(['backbone'], function(Backbone) {
    var MyModel = Backbone.Model.extend({
    });
    return MyModel;
});

define(['backbone', 'models/mymodel'], function(Backbone, MyModel){
    var app = Backbone.View.extend({
        initialize: function() {
            var model = new MyModel();
        }
    });
});
In the former, the first module simply defines MyModel. In the latter, it's created as a variable and returned, and the second module needs to have it put in the parameters when imported.
RequireJS examples I see around seem to vary between the two, but I don't really understand the difference - does one return an instance and the other a constructor?
In my application I didn't even notice that I was actually using both ways in different places, and I think it was causing problems. I was using a lot of
self = this
self.model.doSomething
inside my views and models, and as my app got bigger, I started getting errors because there were conflicts with definitions of self.
Short Version: 1st version == wrong.
Medium Version: The first one bypasses Require entirely by using global variables, while the second one actually uses Require.
Long version:
The way RequireJS modules work is that you call "define", pass it a function (and usually an array of dependencies as well), and whatever gets returned from that function is defined as that module. So if I do:
// Inside foo.js
define([], function() {
    return 1;
});
I've defined the "foo" module to be 1, so if elsewhere I do:
define(['foo'], function(foo) {
    alert(foo); // alerts 1
});
Your first version doesn't return anything, so it's not actually creating a Require module at all.
How does it work then? Well, in that version you do:
MyModel = Backbone.Model.extend({
NOT:
var MyModel = Backbone.Model.extend({
So that's really the same as doing:
window.MyModel = Backbone.Model.extend({
Then when the second part of the code runs, it accesses window.MyModel, and works ... but it's completely bypassing Require.js in the process.
I think the most important thing to take away is: ALWAYS DECLARE (ie. var) YOUR JAVASCRIPT VARIABLES. I don't agree with everything Crockford says, but he's dead right on this one. You will get lots of bugs (with Require and without) if you don't make this a habit.
Beyond that, the next most important thing is probably: ALWAYS RETURN SOMETHING FROM THE FUNCTION YOU PASS TO define. There are certain special cases where you don't want to return anything, but unless you are deliberately trying to solve one of those cases you should always return something to define the module.
Finally, if you're using Require, every variable in your code should either:
Come from the define function (ie. it should be an argument variable from the function that you pass to define), or
It should be declared (ie. var-ed ) inside that file
If you use JSLint or 'use strict'; (as Valentin Nemcev suggested), or if you use an editor like Eclipse, your tools can help you ensure this (and in fact make it easy to ensure).
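Putting those rules together, a minimal sketch of a well-formed module might look like this (the file path and the defaults value are placeholders, not from the question):
// models/mymodel.js (hypothetical path)
define(['backbone'], function (Backbone) {
    // declared with var, so nothing leaks onto window
    var MyModel = Backbone.Model.extend({
        defaults: { value: 10 } // arbitrary example attribute
    });
    // returned, so Require registers it as this module's definition
    return MyModel;
});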
MyModel = Backbone.Model.extend({});
Here you are not returning a constructor; you are defining a global variable and accessing it later in a different module.
Actually, it is wrong; it works by accident. You should return your modules from define and access them via parameters in other modules.
Like this:
return Backbone.Model.extend({});
You should use strict mode to avoid problems with global variables in JS.
Also, a constructor in JS is just a function that is meant to be run with new. Backbone's extend always returns a constructor function, and you create a model instance by calling that constructor with new, as you are doing in both examples.
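To make that concrete, a quick sketch (the attribute is arbitrary):
// extend() returns a constructor function, not an instance
var MyModel = Backbone.Model.extend({});

// calling the constructor with new is what creates the instance
var myModel = new MyModel({ value: 10 });
console.log(myModel instanceof MyModel); // true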
I am in doubt whether the following design pattern would cause a memory leak.
I have been using it for some time with success, but I haven't seen this pattern used by others, so I'd like some confirmation if you see something wrong with it.
Starting next month I have to work on a large project, and I want to know for sure that I can use this without problems, or whether I should use another strategy.
controller.js:
var Controller = function(options){
};

Controller.prototype.makeView = function(options){
    options.controller = this;
    options.otheroption = options.otheroption;
    var view = new View(options);
};

Controller.prototype.getModel = function(options){
    //--- Get model ---
    var model = new Model();
    var promise = model.fetch();
    return promise;
};
view.js:
var View = Backbone.View.extend({
    initialize: function(options){
        this.controller = options.controller;
        this.otheroption = options.otheroption;
    },
    getModel: function(){
        var promise = this.controller.getModel();
        promise.done(_.bind(function(model){
            //Do something with the returned model instance
        }, this));
    }
});
Instantiate controller, eg. from the router, or another controller:
//--- Instantiate the controller and build the view ---//
var controller = new Controller();
controller.makeView(options)
To me, this doesn't look like a circular reference, because both the controller and view are declared as a local variable.
Yet the instantiated view can access the controller functions, which allows me to isolate the RESTful server interactions via models / collections that the view uses.
For me it would seem as if the only reference remaining would be the view that keeps a reference to the controller object.
What I do afterwards is clean up the view (I destroy the instance and its references when I don't need it anymore).
Your opinion on this pattern is highly appreciated.
My purpose is to isolate creation of views / server interactions in separate controller files: if you see holes in my method and have a better way of doing it, please share.
Thanks.
Short answer: There is no memory leak problem in the code you have posted. The view holds a reference to the controller, but not vice versa. So as long as the controller lives longer than the view, that reference does not keep your objects from being garbage-collected. I don't see a circular reference anywhere in your code.
Longer answer: The pitfalls would be in the code you haven't posted. In particular, any event handlers in your view must be cleaned up properly, otherwise your views never fade into oblivion. But you have said in your question that you clean up your view, so I guess you are aware of that sort of problem.
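For example, a typical cleanup sketch, assuming a Backbone version that provides listenTo/stopListening (the close() method here is a naming convention, not part of Backbone):
var View = Backbone.View.extend({
    initialize: function(options){
        this.controller = options.controller;
        // listenTo (rather than model.on) lets stopListening unbind everything later
        this.listenTo(this.model, 'change', this.render);
    },
    close: function(){
        // tear down: unbind the events this view registered, then remove its element
        this.stopListening();
        this.remove();
    }
});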
What the controller is doing here looks like a utility to me. It could easily be managed by a global-level singleton. I see some issues at first glance.
Code repetition: assuming you would create a separate Controller for different types of Models and Views, the makeView and getModel code needs to be repeated for each controller. If you extend from a BaseController, then you need to pass the View and Model classes to the getModel and makeView functions.
How do you handle a use case where you have to use the same model in different Views?
makeView and getModel are designed on the assumption that for each makeView there will be a getModel call, in a particular order.
I would rather write a utility function which can create and deploy views for me.
var deployView = function(view, config){
    //do the view rendering
    view.render();
    view.$el.appendTo(config.el);
};

var createView = function(config) {
    var view;
    var viewType = 'model';
    if (config.collection || config.Collection) {
        viewType = 'collection';
    }
    if (viewType === 'model') {
        if (config.Model) {
            config.model = new config.Model(config.modelAttributes);
            //fetch if needed
        }
    } else {
        if (config.Collection) {
            config.collection = new config.Collection(config.items);
            //fetch if needed
        }
    }
    var filteredConfig = _.omit(config, 'Collection', 'Model', 'View');
    view = new config.View(filteredConfig);
    deployView(view, filteredConfig);
};
JavaScript implementations haven't had a problem with circular references for a long time. (IE6 did have a memory leak from circular references if I recall correctly, which wasn't shared by any other major browser from that period.)
Modern JavaScript implementations perform garbage collection through a "mark and sweep" algorithm. First they scan through your web app's entire memory structure starting from the global object, and mark everything they find. Then they sweep through every object stored in memory and garbage collect anything that wasn't marked. As long as there isn't a reference to your object from the global object or any stored function, it can be garbage collected.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Memory_Management#Mark-and-sweep_algorithm
You're probably thinking of a reference counting-based implementation, which does have issues with memory leaks from circular references. In that implementation as long as one object contained a reference to another, that second object can't be garbage collected. That method was once used in web browsers but not anymore.
Nowadays, most memory leaks come from globally accessible objects you forget to clean up and from accidentally retaining data in function closures (a function that creates another function and passes/saves it somewhere). Since the closure's local variables can be accessed by the function created inside it, they have to be retained as long as that function exists.
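A minimal sketch of the closure case (names are invented for illustration):
function makeHandler() {
    // bigData stays reachable for as long as the returned function exists,
    // because onClick closes over it
    var bigData = new Array(1000000).join('x');
    return function onClick() {
        console.log(bigData.length);
    };
}

// storing the handler on a global keeps bigData alive indefinitely
window.savedHandler = makeHandler();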
So go ahead and add all the circular references you want. Unless you need to target IE6, your code's fine.
I am building an application using Durandal and I have the need to share some functionality across view models.
I have 5 screens to build, and they are all virtually the same screen, except that in the activate function they will call different API endpoints; otherwise the views and view models will be identical.
Is there a pattern that I should be following to structure this correctly to promote code reuse?
If the views and the view models are identical except for calling different API actions, what about just taking in a parameter as part of the route? Then in the activate function, you can switch on the parameter. The route values can be designated so that your URL is relevant, like [http://site/page/subtype], where subtype is the parameter (instead of using numeric values).
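A rough sketch of that idea, assuming a route like page/:subtype whose parameter Durandal passes into activate (the endpoint map and module names are made up):
// viewmodels/page.js
define(['jquery'], function ($) {
    "use strict";
    // hypothetical mapping from route subtype to API endpoint
    var endpoints = {
        customers: 'api/customers',
        orders: 'api/orders'
    };
    return {
        items: [],
        activate: function (subtype) {
            var self = this;
            // returning the promise makes Durandal wait for the data
            return $.getJSON(endpoints[subtype]).then(function (data) {
                self.items = data;
            });
        }
    };
});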
Regarding inheritance: depending on the features you need, there are so many ways to do JavaScript inheritance that it can be a little confusing. There are some full-featured inheritance models provided by libraries such as base2 and Prototype. John Resig also has an inheritance model that I've used successfully.
In general, I prefer to stick to simpler solutions when it comes to JS inheritance. If you need pretty much the full set of inheritance features, those libraries are good to consider. If you only really care about accessing a set of properties and functions from a base class, you might be able to get by with just defining the view model as a function and replacing the function's prototype with the desired base class. Refer to Mozilla's Developer Docs for good info on inheritance.
Here's a sample:
//viewModelBase
define(function (require) {
    "use strict";

    function _ctor() {
        var baseProperty = "Hello from base";

        function baseFunction() {
            console.log("Hello from base function");
        }

        //exports
        this.baseProperty = baseProperty;
        this.baseFunction = baseFunction;
    }

    //return an instance of the view model (singleton)
    return new _ctor();
});

//view model that inherits from viewModelBase
define(function (require) {
    "use strict";

    function _ctor() {
        var property1 = "my property value";

        function activate() {
            //add start up logic here, and return true, false, or a promise()
            return true;
        }

        //exports
        this.activate = activate;
        this.property1 = property1;
    }

    //set the "base"
    var _base = require("viewModelBase");
    _ctor.prototype = _base;
    _ctor.prototype.constructor = _ctor;

    //return an instance of the view model (singleton)
    return new _ctor();
});
Keep in mind that this example effectively results in a singleton (i.e. you'll always get the same instance, no matter how many times you require() it).
If you want a transient (non-singleton) module, just return _ctor. Then you'll need to instantiate a new instance after you require() it.
One more note: in general, functions should be defined on the prototype, not within the constructor function itself. See this link for more information on why. Because this example results in only a single instance, it's a moot point, so the functions are inside the constructor for improved readability and for the ability to access the private vars and functions.
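For reference, a sketch of the transient, prototype-based variant of the same view model (the module returns the constructor instead of an instance):
//view model returned as a constructor (transient)
define(function () {
    "use strict";
    function ViewModel() {
        this.property1 = "my property value";
    }
    // methods on the prototype are shared by every instance
    ViewModel.prototype.activate = function () {
        //add start up logic here
        return true;
    };
    // return the constructor itself; callers create instances with new
    return ViewModel;
});

// elsewhere:
// var ViewModel = require("viewModel");
// var vm = new ViewModel();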
So I'm writing a whole bunch of vendor-specific files in node which all have a similar controller pattern, so it makes sense for me to cut them out and put into a common file.
You can see my common controller file here: https://gist.github.com/081a04073656bf28f46b
Now when I use them in my multiple modules, each consecutively loaded module is overwriting the first. This is because the file is only required once and passed dynamically through to each module on load (this allows me to add extra modules and these modules are able to add their own routes, for example). You can see an example module here: https://gist.github.com/2382bf93298e0fc58599
You can see here on line 53 I've realised that we need to create a separate instance every time, so I've tried to create a new instance by copying the standardControllers object into a new object, then initialising the new object. This has zero impact on the code, and the code behaves in exactly the same way.
Any ideas guys? I'm in a bit of a jam with this one!
First thing I'd do is try to make things simpler and reduce coupling by invoking the single responsibility principle, et al.
http://www.codinghorror.com/blog/2007/03/curlys-law-do-one-thing.html
Put those Schemas into their own files, eg
models/client.js
models/assistant.js
models/contact.js
I've also found that embedded docs + mongoose is generally a PITA. I'd probably promote all those to top level docs.
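For example, a minimal sketch of promoting an embedded doc to a top-level doc with a reference (the field names are made up):
var mongoose = require('mongoose');
var Schema = mongoose.Schema;

// instead of embedding contacts inside Client, give Contact its own model
// and reference the client by id
var ContactSchema = new Schema({
    name: String,
    email: String,
    client: { type: Schema.Types.ObjectId, ref: 'Client' }
});

var ClientSchema = new Schema({
    name: String
});

module.exports = {
    Contact: mongoose.model('Contact', ContactSchema),
    Client: mongoose.model('Client', ClientSchema)
};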
You don't need to enclose your object's keys in quotes.
routes = {
    list: function() {} // no quotes is aok
}
Also 'list' in typical REST apps is called 'index'. Anyway.
Ok, I'd break this up differently. Since you're requiring stuff from the index.js file in the middleware, they become tightly coupled, which is bad. In fact, I think I'd rewrite this whole thing so it was tidier. Sorry.
I'd probably replace your 'middleware' file with an express-resource controller
https://github.com/visionmedia/express-resource (built by author of express). This is a good framework for restful controllers, such as what you're building. The auto-loader is really sweet.
You may also want to look at: http://mcavage.github.com/node-restify/ It's new, I haven't tried it out, but I've heard good things.
Since what you're building is basically an automated mongoose-crud system, with optional overriding, I'd create an express-resource controller as your base
/controllers/base_controller.js
and it might look like
var BaseController = function() {} // BaseController constructor

BaseController.prototype.index = function() {
    // copy from your middleware
}

BaseController.prototype.show = function() {
    // copy from your middleware
}

BaseController.prototype.create = function() {
    // copy from your middleware
}

// etc

module.exports = BaseController
Then I'd do something like:
/controllers/some_resource_controller.js
which might look something like:
var BaseController = require('./base_controller')

var NewResourceController = function() {
    // Apply BaseController constructor (i.e. call super())
    BaseController.apply(this, arguments)
}

NewResourceController.prototype = new BaseController()

NewResourceController.prototype.create = function() {
    // custom create method goes here
}

module.exports = NewResourceController
Then to use it, you can do:
var user = app.resource(myResourceName, new NewResourceController());
…inside some loop which sets myResourceName to be whatever crud you're trying to set up.
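For instance, a rough sketch of that loop (the resource names and controller paths are made up):
// assumes express-resource has been required, so app.resource exists
var resources = {
    clients: require('./controllers/client_controller'),
    contacts: require('./controllers/contact_controller')
};

Object.keys(resources).forEach(function (myResourceName) {
    var ResourceController = resources[myResourceName];
    app.resource(myResourceName, new ResourceController());
});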
Here's some links for you to read:
http://tobyho.com/2011/11/11/js-object-inheritance/
http://yehudakatz.com/2011/08/12/understanding-prototypes-in-javascript/
Also, it sounds like you're not writing tests. Write tests.
http://www.codinghorror.com/blog/2006/07/i-pity-the-fool-who-doesnt-write-unit-tests.html
Recently I've been having an argument with some co-workers about something that I find incorrect.
We're using Backbone in a large application, and my way of creating views is the 'standard' Backbone way:
var MyView = Backbone.View.extend({
    className: 'foo',
    initialize: function() {
        _.bindAll(this, 'render' /* ... more stuff */);
    },
    render: function() {
        /* ... render, usually
           using _.template and passing
           in this.model.toJSON()... */
        return this;
    }
});
But someone in the team recently decided to do it this way :
var MyView = Backbone.View.extend((function() {
    /* 'private stuff' */
    function bindMethods(view) {
        _.bindAll(view, /* ... more stuff */);
    }
    function render(view) {
        /* ... render, usually
           using _.template and passing
           in view.model.toJSON()... */
    }
    return {
        className: 'foo',
        initialize: function() {
            bindMethods(this);
            render(this);
        }
    };
}()));
That's the idea in pseudo-code.
Having read the Backbone source and various tutorials and articles, I find this to be a bad practice (to me it makes no sense), but I'd love some feedback from other Backbone developers/users.
Thanks in advance.
One benefit I see from using the closure is providing a private scope for variables and functions that you don't want to be accessible from code outside the view.
Even so, I haven't seen many Backbone apps use a closure to define a view/model/collection etc.
Here's an email from Jeremy Ashkenas concerning this issue as well.
Yes, using closures to create instances of objects with private variables is possible in JavaScript. But it's a bad practice, and should be avoided. This has nothing to do with Backbone in particular; it's the nature of OOP in JavaScript.
If you use the closure pattern (also known as the "module" pattern), you're creating a new copy of each function for each instance you create. This completely ignores prototypes, and is terribly inefficient both in terms of speed and especially in terms of memory use. If you make 10,000 models, you'll also have 10,000 copies of each member function. With prototypes (with Backbone.Model.extend), you'll only have a single copy of each member function, even if there are 10,000 instances of the class.
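A small sketch of the difference he's describing (the method name is arbitrary):
// closure/"module" pattern: every instance gets its own copy of doSomething
function makeClosureModel() {
    var secret = 42;
    return {
        doSomething: function () { return secret; }
    };
}

// prototype pattern (what Backbone.Model.extend uses): one shared copy
var ProtoModel = Backbone.Model.extend({
    doSomething: function () { return this.get('secret'); }
});

var a = makeClosureModel();
var b = makeClosureModel();
console.log(a.doSomething === b.doSomething); // false: two separate copies

var c = new ProtoModel({ secret: 42 });
var d = new ProtoModel({ secret: 42 });
console.log(c.doSomething === d.doSomething); // true: shared via the prototype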
I totally agree with Paul here. Sometimes you may find it necessary to define methods and properties that are private and can't be messed with from outside. I think it depends on whether you need this scoping mechanism in your class or not. Mixing both approaches, with respect to the requirements you have for the class, wouldn't be so bad, would it?
How much can I stretch RequireJS to provide dependency injection for my app? As an example, let's say I have a model that I want to be a singleton. Not a singleton in a self-enforcing getInstance()-type singleton, but a context-enforced singleton (one instance per "context"). I'd like to do something like...
require(['mymodel'], function(mymodel) {
    ...
});
And have mymodel be an instance of the MyModel class. If I were to do this in multiple modules, I would want mymodel to be the same, shared instance.
I have successfully made this work by making the mymodel module like this:
define(function() {
    var MyModel = function() {
        this.value = 10;
    };
    return new MyModel();
});
Is this type of usage expected and common or am I abusing RequireJS? Is there a more appropriate way I can perform dependency injection with RequireJS? Thanks for your help. Still trying to grasp this.
This is not actually dependency injection, but instead service location: your other modules request a "class" by a string "key," and get back an instance of it that the "service locator" (in this case RequireJS) has been wired to provide for them.
Dependency injection would involve returning the MyModel constructor, i.e. return MyModel, then in a central composition root injecting an instance of MyModel into other instances. I've put together a sample of how this works here: https://gist.github.com/1274607 (also quoted below)
This way the composition root determines whether to hand out a single instance of MyModel (i.e. make it singleton scoped) or new ones for each class that requires it (instance scoped), or something in between. That logic belongs neither in the definition of MyModel, nor in the classes that ask for an instance of it.
(Side note: although I haven't used it, wire.js is a full-fledged dependency injection container for JavaScript that looks pretty cool.)
You are not necessarily abusing RequireJS by using it as you do, although what you are doing seems a bit roundabout, i.e. declaring a class and then returning a new instance of it. Why not just do the following?
define(function () {
    var value = 10;
    return {
        doStuff: function () {
            alert(value);
        }
    };
});
The analogy you might be missing is that modules are equivalent to "namespaces" in most other languages, albeit namespaces you can attach functions and values to. (So more like Python than Java or C#.) They are not equivalent to classes, although as you have shown you can make a module's exports equal to those of a given class instance.
So you can create singletons by attaching functions and values directly to the module, but this is kind of like creating a singleton by using a static class: it is highly inflexible and generally not best practice. However, most people do treat their modules as "static classes," because properly architecting a system for dependency injection requires a lot of thought from the outset that is not really the norm in JavaScript.
Here's https://gist.github.com/1274607 inline:
// EntryPoint.js
define(function () {
    return function EntryPoint(model1, model2) {
        // stuff
    };
});

// Model1.js
define(function () {
    return function Model1() {
        // stuff
    };
});

// Model2.js
define(function () {
    return function Model2(helper) {
        // stuff
    };
});

// Helper.js
define(function () {
    return function Helper() {
        // stuff
    };
});

// composition root, probably your main module
define(function (require) {
    var EntryPoint = require("./EntryPoint");
    var Model1 = require("./Model1");
    var Model2 = require("./Model2");
    var Helper = require("./Helper");

    var entryPoint = new EntryPoint(new Model1(), new Model2(new Helper()));
    entryPoint.start();
});
If you're serious about DI / IOC, you might be interested in wire.js: https://github.com/cujojs/wire
We use a combination of service location (like Domenic describes, but using curl.js instead of RequireJS) and DI (using wire.js). Service location comes in very handy when using mock objects in test harnesses. DI seems the best choice for most other use cases.
Not a singleton in a self-enforcing getInstance()-type singleton, but
a context-enforced singleton (one instance per "context").
I would recommend it only for static objects. It's perfectly fine to have a static object as a module that you load in your require/define blocks. You then create a class with only static properties and functions. You then have the equivalent of the Math object, which has constants like PI, E, and SQRT2, and functions like round(), random(), max(), and min(). Great for creating utility classes that can be injected at any time.
Instead of this:
define(function() {
    var MyModel = function() {
        this.value = 10;
    };
    return new MyModel();
});
Which creates an instance, use the pattern for a static object (one where the values are always the same, since the object never gets instantiated):
define(function() {
    return {
        value: 10
    };
});
or
define(function() {
    var CONSTANT = 10;
    return {
        value: CONSTANT
    };
});
If you want to pass an instance (the result of using a module that returns new MyModel();), then, within an initialize function, pass a variable that captures the current state/context, or pass an object that contains the information about state/context that your module needs to know about.
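A rough sketch of that last suggestion (module and property names are invented):
// context.js - the module returns one shared instance per RequireJS context
define(function () {
    function Context() {
        this.value = 10;
    }
    return new Context();
});

// someModule.js - state is handed in through an initialize function, not globals
define(['context'], function (context) {
    return {
        initialize: function (options) {
            // capture whatever state/context this module needs to know about
            context.value = options.value;
        },
        report: function () {
            return context.value;
        }
    };
});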