Given the following models:
(note: these are simplified for illustration purposes)
App.CustomerOrder = DS.Model.extend({
deliveries: DS.hasMany('delivery'),
total: DS.attr('number')
});
App.Delivery = DS.Model.extend({
orderlines: DS.hasMany('orderline')
});
App.OrderLine = DS.Model.extend({
productid: DS.attr('string'),
qtyordered: DS.attr('number')
});
When the app first loads I'm querying an API that sends me information about which dependencies should trigger an update. So for example it'll send me something like:
CustomerOrder: ["deliveries", "deliveries.orderlines", "deliveries.orderlines.qtyordered"...]
...meaning: if deliveries are added/deleted from a customer order, or if lines are added/deleted from a delivery attached to a customer order, or if the qtyordered on an orderline on a delivery attached to a customer order changes, then the API expects me to serialize the CustomerOrder (along with the entire chain of relationships) and send it to an 'update' service (i.e. a server/customerorder/updates type thing) that will run various routines, fill in pieces of data, and send the entire chain of objects back.
For illustration purposes I've put a simple example here of an order total (I realize this is easily calculated client-side, but there's a bunch of other stuff that would duplicate code from the server). So, if the qtyordered on an orderline changes, I need to send the customerorder instance to the server, where it will update my total field.
One of the challenges is that I can't hard-code that dependency list by setting up observer functions with .observes(); it has to be done dynamically after the dependency data is loaded (presumably using addObserver). The other is that observers won't dig multiple layers deep like that.
I've tried using a mixin on the models that overrides the init function and does exactly that.
clientchangeset: DS.attr('raw'),
init: function() {
this._super.apply(this, arguments);
var className = this.auth.camelizedModelString(this.constructor.toString());
var watchlist = this.auth.dependencies[className] || [];
var self = this;
watchlist.forEach(function(watch) {
if(watch.hasOwnProperty('attributeName') && watch.hasOwnProperty('collectionFlag')) {
// {attributeName: attributeName, collectionFlag: collectionFlag}
if(watch['collectionFlag']) {
console.log(className+'.addObserver('+watch['attributeName']+'.@each.clientchangeset)');
self.addObserver(watch['attributeName']+'.@each.clientchangeset', null, 'updateChangelist');
} else {
console.log(className+'.addObserver('+watch['attributeName']+')');
self.addObserver(watch['attributeName'], null, 'updateChangelist');
}
}
});
},
This appears to work, but only one layer deep. For completeness, here's the updateChangelist function:
updateChangelist: function(src, field, value) { //jshint ignore:line
if(this.get('pauseUpdates')) {
return;
}
var className = this.auth.camelizedModelString(this.constructor.toString());
var oldclientchangeset = this.get('clientchangeset') || [];
console.log('Before: '+className+'.[clientchangeset]= '+oldclientchangeset);
oldclientchangeset.pushObject(field);
this.set('clientchangeset', oldclientchangeset);
console.log('After: '+className+'.[clientchangeset]= '+oldclientchangeset);
}
So in general the way I got this to work was to create the observers as indicated, but the handlers simply update a property called '_needsUpdate' on each level of the relationships whenever they are triggered. '_needsUpdate' is just a date so when triggered I do:
this.set('_needsUpdate', +new Date());
Then when setting up observers at each level for that level's children, I just set up a single observer to look at child.@each._needsUpdate.
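For reference, here is a minimal sketch of that cascading pattern as a mixin. The DependencyWatcher name and the setupWatchers hook are hypothetical; this only illustrates the idea of bubbling changes up one level at a time instead of observing deep paths:
App.DependencyWatcher = Ember.Mixin.create({
  _needsUpdate: null,

  // Called with the watch list for this model type, e.g. from init.
  setupWatchers: function(watchlist) {
    var self = this;
    watchlist.forEach(function(watch) {
      if (watch.collectionFlag) {
        // Observe only the children's _needsUpdate timestamps, one level deep.
        self.addObserver(watch.attributeName + '.@each._needsUpdate', self, 'flagNeedsUpdate');
      } else {
        self.addObserver(watch.attributeName, self, 'flagNeedsUpdate');
      }
    });
  },

  flagNeedsUpdate: function() {
    // Bumping the timestamp fires the parent's '@each._needsUpdate' observer,
    // so the change bubbles up the relationship chain one level at a time.
    this.set('_needsUpdate', +new Date());
  }
});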
I'm reading about Flux but the example Todo app is too simplistic for me to understand some key points.
Imagine a single-page app like Facebook that has user profile pages. On each user profile page, we want to show some user info and their last posts, with infinite scroll. We can navigate from one user profile to another one.
In Flux architecture, how would this correspond to Stores and Dispatchers?
Would we use one PostStore per user, or would we have some kind of a global store? What about dispatchers, would we create a new Dispatcher for each “user page”, or would we use a singleton? Finally, what part of the architecture is responsible for managing the lifecycle of “page-specific” Stores in response to route change?
Moreover, a single pseudo-page may have several lists of data of the same type. For example, on a profile page, I want to show both Followers and Follows. How can a singleton UserStore work in this case? Would UserPageStore manage followedBy: UserStore and follows: UserStore?
In a Flux app there should only be one Dispatcher. All data flows through this central hub. Having a singleton Dispatcher allows it to manage all Stores. This becomes important when you need Store #1 to update itself, and then have Store #2 update itself based on both the Action and the state of Store #1. Flux assumes this situation is an eventuality in a large application. Ideally this situation would not need to happen, and developers should strive to avoid this complexity, if possible. But the singleton Dispatcher is ready to handle it when the time comes.
Stores are singletons as well. They should remain as independent and decoupled as possible -- a self-contained universe that one can query from a Controller-View. The only road into the Store is through the callback it registers with the Dispatcher. The only road out is through getter functions. Stores also publish an event when their state has changed, so Controller-Views can know when to query for the new state, using the getters.
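As a rough sketch of that relationship (assuming the Dispatcher from the flux npm package; the store names and the action type here are made up):
var Dispatcher = require('flux').Dispatcher;

var AppDispatcher = new Dispatcher();   // the single, app-wide dispatcher

var StoreOne = { items: [] };
StoreOne.dispatchToken = AppDispatcher.register(function(action) {
  if (action.type === 'SOMETHING_HAPPENED') {
    StoreOne.items.push(action.payload);
  }
});

var StoreTwo = { count: 0 };
StoreTwo.dispatchToken = AppDispatcher.register(function(action) {
  if (action.type === 'SOMETHING_HAPPENED') {
    // Store #2 depends on Store #1 having handled the action first.
    AppDispatcher.waitFor([StoreOne.dispatchToken]);
    StoreTwo.count = StoreOne.items.length;
  }
});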
In your example app, there would be a single PostStore. This same store could manage the posts on a "page" (pseudo-page) that is more like FB's Newsfeed, where posts appear from different users. Its logical domain is the list of posts, and it can handle any list of posts. When we move from pseudo-page to pseudo-page, we want to reinitialize the state of the store to reflect the new state. We might also want to cache the previous state in localStorage as an optimization for moving back and forth between pseudo-pages, but my inclination would be to set up a PageStore that waits for all other stores, manages the relationship with localStorage for all the stores on the pseudo-page, and then updates its own state. Note that this PageStore would store nothing about the posts -- that's the domain of the PostStore. It would simply know whether a particular pseudo-page has been cached or not, because pseudo-pages are its domain.
The PostStore would have an initialize() method. This method would always clear the old state, even if this is the first initialization, and then create the state based on the data it received through the Action, via the Dispatcher. Moving from one pseudo-page to another would probably involve a PAGE_UPDATE action, which would trigger the invocation of initialize(). There are details to work out around retrieving data from the local cache, retrieving data from the server, optimistic rendering and XHR error states, but this is the general idea.
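Continuing the sketch above (and reusing its AppDispatcher), the initialize()/PAGE_UPDATE idea might look roughly like this; the action shape, getter, and change event are assumptions, not the actual code:
var EventEmitter = require('events').EventEmitter;

var _posts = {};   // private state, keyed by post id

var PostStore = Object.assign({}, EventEmitter.prototype, {
  initialize: function(rawPosts) {
    _posts = {};   // always clear the old state first
    (rawPosts || []).forEach(function(post) {
      _posts[post.id] = post;
    });
  },
  getAll: function() { return _posts; },           // the only road out: getters
  emitChange: function() { this.emit('change'); }  // Controller-Views listen for this
});

// The only road in: the callback registered with the Dispatcher.
PostStore.dispatchToken = AppDispatcher.register(function(action) {
  if (action.type === 'PAGE_UPDATE') {
    PostStore.initialize(action.posts);
    PostStore.emitChange();
  }
});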
If a particular pseudo-page does not need all the Stores in the application, I'm not entirely sure there is any reason to destroy the unused ones, other than memory constraints. But stores don't typically consume a great deal of memory. You just need to make sure to remove the event listeners in the Controller-Views you are destroying. This is done in React's componentWillUnmount() method.
(Note: I have used ES6 syntax using JSX Harmony option.)
As an exercise, I wrote a sample Flux app that lets you browse GitHub users and repos.
It is based on fisherwebdev's answer but also reflects an approach I use for normalizing API responses.
I made it to document a few approaches I have tried while learning Flux.
I tried to keep it close to real world (pagination, no fake localStorage APIs).
There are a few bits here I was especially interested in:
It uses Flux architecture and react-router;
It can show a user page with partially known info and load details on the go;
It supports pagination both for users and repos;
It parses Github's nested JSON responses with normalizr;
Content Stores don't need to contain a giant switch with actions;
“Back” is immediate (because all data is in Stores).
How I Classify Stores
I tried to avoid some of the duplication I've seen in other Flux examples, specifically in Stores.
I found it useful to logically divide Stores into three categories:
Content Stores hold all app entities. Everything that has an ID needs its own Content Store. Components that render individual items ask Content Stores for the fresh data.
Content Stores harvest their objects from all server actions. For example, UserStore looks into action.response.entities.users if it exists, regardless of which action fired. There is no need for a switch. Normalizr makes it easy to flatten any API responses to this format.
// Content Stores keep their data like this
{
7: {
id: 7,
name: 'Dan'
},
...
}
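As a rough illustration of the normalizr step (using the Schema/arrayOf API normalizr had at the time; the schema definitions and the sample response are assumptions):
import { Schema, arrayOf, normalize } from 'normalizr';

const userSchema = new Schema('users');
const repoSchema = new Schema('repos');
repoSchema.define({ owner: userSchema });   // a repo's owner is a user

const apiResponse = [
  { id: 42, name: 'flux-example', owner: { id: 7, name: 'Dan' } }
];

const normalized = normalize(apiResponse, arrayOf(repoSchema));
// normalized.entities.users -> { 7: { id: 7, name: 'Dan' } }
// normalized.entities.repos -> { 42: { id: 42, name: 'flux-example', owner: 7 } }
// normalized.result         -> [42]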
List Stores keep track of IDs of entities that appear in some global list (e.g. “feed”, “your notifications”). In this project, I don't have such Stores, but I thought I'd mention them anyway. They handle pagination.
They normally respond to just a few actions (e.g. REQUEST_FEED, REQUEST_FEED_SUCCESS, REQUEST_FEED_ERROR).
// Paginated Stores keep their data like this
[7, 10, 5, ...]
Indexed List Stores are like List Stores but they define a one-to-many relationship. For example, “user's subscribers”, “repository's stargazers”, “user's repositories”. They also handle pagination.
They also normally respond to just a few actions (e.g. REQUEST_USER_REPOS, REQUEST_USER_REPOS_SUCCESS, REQUEST_USER_REPOS_ERROR).
In most social apps, you'll have lots of these and you want to be able to quickly create one more of them.
// Indexed Paginated Stores keep their data like this
{
2: [7, 10, 5, ...],
6: [7, 1, 2, ...],
...
}
Note: these are not actual classes or something; it's just how I like to think about Stores.
I made a few helpers though.
StoreUtils
createStore
This method gives you the most basic Store:
createStore(spec) {
var store = merge(EventEmitter.prototype, merge(spec, {
emitChange() {
this.emit(CHANGE_EVENT);
},
addChangeListener(callback) {
this.on(CHANGE_EVENT, callback);
},
removeChangeListener(callback) {
this.removeListener(CHANGE_EVENT, callback);
}
}));
_.each(store, function (val, key) {
if (_.isFunction(val)) {
store[key] = store[key].bind(store);
}
});
store.setMaxListeners(0);
return store;
}
I use it to create all Stores.
isInBag, mergeIntoBag
Small helpers useful for Content Stores.
isInBag(bag, id, fields) {
var item = bag[id];
if (!bag[id]) {
return false;
}
if (fields) {
return fields.every(field => item.hasOwnProperty(field));
} else {
return true;
}
},
mergeIntoBag(bag, entities, transform) {
if (!transform) {
transform = (x) => x;
}
for (var key in entities) {
if (!entities.hasOwnProperty(key)) {
continue;
}
if (!bag.hasOwnProperty(key)) {
bag[key] = transform(entities[key]);
} else if (!shallowEqual(bag[key], entities[key])) {
bag[key] = transform(merge(bag[key], entities[key]));
}
}
}
PaginatedList
Stores pagination state and enforces certain assertions (can't fetch page while fetching, etc).
class PaginatedList {
constructor(ids) {
this._ids = ids || [];
this._pageCount = 0;
this._nextPageUrl = null;
this._isExpectingPage = false;
}
getIds() {
return this._ids;
}
getPageCount() {
return this._pageCount;
}
isExpectingPage() {
return this._isExpectingPage;
}
getNextPageUrl() {
return this._nextPageUrl;
}
isLastPage() {
return this.getNextPageUrl() === null && this.getPageCount() > 0;
}
prepend(id) {
this._ids = _.union([id], this._ids);
}
remove(id) {
this._ids = _.without(this._ids, id);
}
expectPage() {
invariant(!this._isExpectingPage, 'Cannot call expectPage twice without prior cancelPage or receivePage call.');
this._isExpectingPage = true;
}
cancelPage() {
invariant(this._isExpectingPage, 'Cannot call cancelPage without prior expectPage call.');
this._isExpectingPage = false;
}
receivePage(newIds, nextPageUrl) {
invariant(this._isExpectingPage, 'Cannot call receivePage without prior expectPage call.');
if (newIds.length) {
this._ids = _.union(this._ids, newIds);
}
this._isExpectingPage = false;
this._nextPageUrl = nextPageUrl || null;
this._pageCount++;
}
}
PaginatedStoreUtils
createListStore, createIndexedListStore, createListActionHandler
Makes creation of Indexed List Stores as simple as possible by providing boilerplate methods and action handling:
var PROXIED_PAGINATED_LIST_METHODS = [
'getIds', 'getPageCount', 'getNextPageUrl',
'isExpectingPage', 'isLastPage'
];
function createListStoreSpec({ getList, callListMethod }) {
var spec = {
getList: getList
};
PROXIED_PAGINATED_LIST_METHODS.forEach(method => {
spec[method] = function (...args) {
return callListMethod(method, args);
};
});
return spec;
}
/**
* Creates a simple paginated store that represents a global list (e.g. feed).
*/
function createListStore(spec) {
var list = new PaginatedList();
function getList() {
return list;
}
function callListMethod(method, args) {
return list[method].apply(list, args);
}
return createStore(
merge(spec, createListStoreSpec({
getList: getList,
callListMethod: callListMethod
}))
);
}
/**
* Creates an indexed paginated store that represents a one-many relationship
* (e.g. user's posts). Expects foreign key ID to be passed as first parameter
* to store methods.
*/
function createIndexedListStore(spec) {
var lists = {};
function getList(id) {
if (!lists[id]) {
lists[id] = new PaginatedList();
}
return lists[id];
}
function callListMethod(method, args) {
var id = args.shift();
if (typeof id === 'undefined') {
throw new Error('Indexed pagination store methods expect ID as first parameter.');
}
var list = getList(id);
return list[method].apply(list, args);
}
return createStore(
merge(spec, createListStoreSpec({
getList: getList,
callListMethod: callListMethod
}))
);
}
/**
* Creates a handler that responds to list store pagination actions.
*/
function createListActionHandler(actions) {
var {
request: requestAction,
error: errorAction,
success: successAction,
preload: preloadAction
} = actions;
invariant(requestAction, 'Pass a valid request action.');
invariant(errorAction, 'Pass a valid error action.');
invariant(successAction, 'Pass a valid success action.');
return function (action, list, emitChange) {
switch (action.type) {
case requestAction:
list.expectPage();
emitChange();
break;
case errorAction:
list.cancelPage();
emitChange();
break;
case successAction:
list.receivePage(
action.response.result,
action.response.nextPageUrl
);
emitChange();
break;
}
};
}
var PaginatedStoreUtils = {
createListStore: createListStore,
createIndexedListStore: createIndexedListStore,
createListActionHandler: createListActionHandler
};
createStoreMixin
A mixin that allows components to tune in to Stores they're interested in, e.g. mixins: [createStoreMixin(UserStore)].
function createStoreMixin(...stores) {
var StoreMixin = {
getInitialState() {
return this.getStateFromStores(this.props);
},
componentDidMount() {
stores.forEach(store =>
store.addChangeListener(this.handleStoresChanged)
);
this.setState(this.getStateFromStores(this.props));
},
componentWillUnmount() {
stores.forEach(store =>
store.removeChangeListener(this.handleStoresChanged)
);
},
handleStoresChanged() {
if (this.isMounted()) {
this.setState(this.getStateFromStores(this.props));
}
}
};
return StoreMixin;
}
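For example, a component using the mixin might look like this (the component and the UserStore.get getter are hypothetical; getStateFromStores is the one method the mixin expects you to define):
var UserPage = React.createClass({
  mixins: [createStoreMixin(UserStore)],

  // Called on mount and again whenever UserStore emits a change.
  getStateFromStores(props) {
    return { user: UserStore.get(props.login) };
  },

  render() {
    var user = this.state.user;
    return <div>{user ? user.name : 'Loading...'}</div>;
  }
});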
So in Reflux the concept of the Dispatcher is removed and you only need to think in terms of data flow through actions and stores. I.e.
Actions <-- Store { <-- Another Store } <-- Components
Each arrow here models how the data flow is listened to, which in turn means that the data flows in the opposite direction. The actual figure for data flow is this:
Actions --------> Stores --------> Components
   ^                 |                  |
   +-----------------+------------------+
In your use case, if I understood correctly, we need an openUserProfile action that initiates loading the user profile and switching the page, and also some post-loading actions that load posts when the user profile page is opened and during the infinite scroll event. So I'd imagine we have the following data stores in the application:
A page data store that handles switching pages
A user profile data store that loads the user profile when the page is opened
A posts list data store that loads and handles the visible posts
In Reflux you'd set it up like this:
The actions
// Set up the actions we need for this use case.
var Actions = Reflux.createActions(['openUserProfile', 'loadUserProfile', 'loadInitialPosts', 'loadMorePosts']);
The page store
var currentPageStore = Reflux.createStore({
init: function() {
this.listenTo(Actions.openUserProfile, this.openUserProfileCallback);
},
// We are assuming that the action is invoked with a profileid
openUserProfileCallback: function(userProfileId) {
// Trigger to the page handling component to open the user profile
this.trigger('user profile');
// Invoke the following action to load the user profile
Actions.loadUserProfile(userProfileId);
}
});
The user profile store
var currentUserProfileStore = Reflux.createStore({
init: function() {
this.listenTo(Actions.loadUserProfile, this.switchToUser);
},
switchToUser: function(userProfileId) {
// Do some ajaxy stuff, then with the loaded user profile
// trigger the store's internal change event with it
this.trigger(userProfile);
}
});
The posts store
var currentPostsStore = Reflux.createStore({
init: function() {
// for initial posts loading by listening to when the
// user profile store changes
this.listenTo(currentUserProfileStore, this.loadInitialPostsFor);
// for infinite posts loading
this.listenTo(Actions.loadMorePosts, this.loadMorePosts);
},
loadInitialPostsFor: function(userProfile) {
this.currentUserProfile = userProfile;
// Do some ajax stuff here to fetch the initial posts then send
// them through the change event
this.trigger(postData, 'initial');
},
loadMorePosts: function() {
// Do some ajaxy stuff to fetch more posts then send them through
// the change event
this.trigger(postData, 'more');
}
});
The components
I'm assuming you have a component for the whole page view, the user profile page and the posts list. The following needs to be wired up:
The buttons that open the user profile need to invoke Actions.openUserProfile with the correct ID in their click event.
The page component should listen to the currentPageStore so it knows which page to switch to.
The user profile page component needs to listen to the currentUserProfileStore so it knows what user profile data to show.
The posts list needs to listen to the currentPostsStore to receive the loaded posts (see the sketch below).
The infinite scroll event needs to call Actions.loadMorePosts.
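As a rough sketch of the posts list component (assuming Reflux's ListenerMixin; the rendering and post fields are only illustrative):
var PostsList = React.createClass({
  mixins: [Reflux.ListenerMixin],

  getInitialState: function() {
    return { posts: [] };
  },

  componentDidMount: function() {
    // Re-render whenever the posts store triggers new post data.
    this.listenTo(currentPostsStore, this.onPostsChanged);
  },

  onPostsChanged: function(postData, mode) {
    // 'initial' replaces the list, 'more' appends to it.
    this.setState({
      posts: mode === 'initial' ? postData : this.state.posts.concat(postData)
    });
  },

  onScrolledToBottom: function() {
    // Call this from your scroll handler to page in more posts.
    Actions.loadMorePosts();
  },

  render: function() {
    return <ul>{this.state.posts.map(function(post) {
      return <li key={post.id}>{post.text}</li>;
    })}</ul>;
  }
});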
And that should be pretty much it.
Disclaimer: I tried to make a jsfiddle of this, but without a public source for the RESTAdapter, I couldn't really make it work.
I have a model with a hasMany array of child models. I need to add a new model to this child array and save to the server:
App.FooModel = DS.Model.extend({
'name': DS.attr('string'),
'bars': DS.hasMany('App.BarModel')
});
App.BarModel = DS.Model.extend({
'name': DS.attr('string'),
});
App.ApplicationController = Ember.Controller.extend({
init: function() {
var foo = App.FooModel.find(101); // -- currently has bars[201, 202, 203]
var newBar = loadFixture( App.BarModel, 204 );
var self = this;
setTimeout( function() { // -- just to be sure our models are loaded before we try this
// foo.currentState: saved
foo.get('bars').addObject(newBar);
// foo.currentState: saved
foo.store.commit(); // -- nothing happens
}, 1000);
}
});
App = Ember.Application.create({
store: DS.Store.create({
revision: 11
})
});
But nothing happens. My parent model doesn't get marked as dirty, so the store never attempts a commit. Is there a different way I should be adding this relationship to the parent? Is this a bug?
Current Workaround:
foo.get('bars').addObject(newBar);
var save = foo.get('name');
foo.set('name', (save + '!'));
foo.set('name', save); // -- this marks our record as dirty, so a save will actually happen
foo.store.commit();
Edit 1: I'm aware that ember-data will only serialize this data if it was embedded to begin with (https://stackoverflow.com/a/15145803/84762), but I have overridden my serializer to handle this. The issue I'm having is that the store never even attempts to save this change so we never even get to the serializer.
Edit 2: I suspect this might have something to do with this bug, but at the same time that would mean this wouldn't work for anyone, and I have a hard time believing no one else has run into this yet.
It looks like you're modeling a one-to-many relationship, yet you didn't include the belongsTo option on App.BarModel. Check this link out:
http://emberjs.com/guides/models/defining-models/#toc_one-to-many
App.Post = DS.Model.extend({
comments: DS.hasMany('App.Comment')
});
App.Comment = DS.Model.extend({
post: DS.belongsTo('App.Post')
});
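Applied to the models in the question, that would look something like this:
App.FooModel = DS.Model.extend({
  'name': DS.attr('string'),
  'bars': DS.hasMany('App.BarModel')
});

App.BarModel = DS.Model.extend({
  'name': DS.attr('string'),
  'foo': DS.belongsTo('App.FooModel')   // the missing inverse side
});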
From what I understand, you did not use the embedded feature of the relationship but overrode your serializer to handle the serialization of bars objects into the foo object.
I think your bug probably comes from here: if your relation is not embedded, there is no reason to mark the foo object dirty when you add an object to its bars association, because what usually changes is a foo_id key on the bar object you added; there are then no changes on the foo object itself to send to the API.
I'm using ember.js RC1 + ember-data rev 11 (but I also need some plain ajax for configuration-like models). I want to loop over a simple list of objects and display the records (note: here I just create a basic array).
The content I have bound has the following custom find method defined
App.Foo = DS.Model.extend({
name: DS.attr('string')
}).reopenClass({
records: [],
all: function() {
return this.records;
},
find: function() {
var self = this;
$.getJSON('/api/foo/', function(response) {
response.forEach(function(data) {
//say I want to kill everything in the array here for some strange reason...
self.records = [];
//the template still shows the record ... not an empty list ?
}, this);
});
return this.records;
}
});
My other model uses this directly
App.Related = DS.Model.extend({
listings: function() {
return App.Foo.find();
}.property()
});
Now inside my template
{{#each foo in related.listings}}
{{foo.name}}<br />
{{/each}}
The list loads up with whatever I put in the array by default (say I add a simple object using createRecord like so)
add: function(record) {
this.records.addObject(App.Foo.createRecord(record));
},
and when the template is rendered I see everything listed here... but as I put in the comments above, if I decide to remove records or null out the bound list, it doesn't seem to reflect this in any way.
Is it possible to bind a simple array as I have and yet remove items from it using something basic such as splice? Or even a drastic self.records = []; ?
self.records.splice(i, 1);
Even when I query the client manually after the splice or the emptying, it returns 0
console.log(App.Foo.all().get('length'));
Initially I see records, but then I see they are gone (yet the HTML doesn't change).
I understand your question to mean that the following remark is the point you are struggling with:
response.forEach(function(data) {
//say I want to kill everything in the array here for some strange reason...
self.records = [];
//the template still shows the record ... not an empty list ?
}, this);
You are wondering why your template is not showing an empty list. It's because you did not tell Ember when to update the template. You can tell Ember this way:
App.Related = DS.Model.extend({
listings: function() {
return App.Foo.find();
}.property("App.Foo.records.#each")
});
Now Ember knows that whenever something is added to or removed from your array, it should update the listings property of your model, and therefore that your view needs re-rendering.
One additional remark on the original question regarding "simple javascript arrays": when you use Ember, you actually do not instantiate plain JS arrays. When you declare:
var a = []; // is the same as -> var a = Ember.A();
Ember does some magic and wraps it in an enhanced Ember version of an array (Ember.NativeArray), which enables you to use property dependency declarations like the one mentioned above. This allows Ember to use ArrayObservers on those arrays, although they may feel like plain JS arrays.
You need to use the set method when you modify properties and get when you return them, or else Ember won't be able to do its magic and update the template.
In your case, there is an additional problem, which is that in find(), you return a reference to records before your asynchronous getJSON call replaces it with a new empty array. The calling method will never see the new array of records. You probably want to use clear() instead.
Your model should look something like this:
App.Foo = DS.Model.extend({
name: DS.attr('string')
}).reopenClass({
records: [],
all: function() {
// can't use 'this.get(...)' within a class method
return Ember.get(this, 'records');
},
findAll: function() {
  var records = Ember.get(this, 'records');
  var self = this;
  $.getJSON('/api/foo/', function(response) {
    records.clear();
    // in this case my json has a 'foos' root
    response.foos.forEach(function(json) {
      // 'self' keeps a reference to the class, since 'this' inside
      // the getJSON callback is not the class itself
      self.add(json);
    });
  });
  // this gets updated asynchronously
  return records;
},
add: function(json) {
// in order to access the store within a
// class method, I cached it at App.store
var store = App.get('store');
store.load(App.Foo, json);
var records = Ember.get(this, 'records');
records.addObject(App.Foo.find(json.id));
}
});
Note that the addObject() method respects observers, so the template updates as expected. removeObject() is the corresponding binding-aware method to remove an element.
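So, instead of splicing or re-assigning the array, you would do something like this (newFoo and someExistingFoo being whatever records you are adding or removing):
// binding-aware mutations: the template re-renders after each of these
records.addObject(newFoo);
records.removeObject(someExistingFoo);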
Here's a working jsfiddle.
See JsFiddle here http://jsfiddle.net/WtgbV/2/
In words: I have some ajax call, and in the server's response I get an array of items (Items in the knockout viewmodel).
I need to know, for example, that the name property was changed on the element with id == 2, so that I can save the changes automatically to the server (via a POST request).
What is the simplest/easiest way to track changes in each element of the Items array?
I co-wrote a component called DirtyFlag that detects changes in Knockout observables (or a set of them). You can grab it from my library called KoLite, available on NuGet or GitHub.
https://github.com/CodeSeven/KoLite
https://nuget.org/packages/KoLite
dirtyFlag
// Your model
var Person = function () {
var self = this;
self.id = ko.observable();
self.firstName = ko.observable().extend({ required: true });
self.lastName = ko.observable().extend({ required: true });
self.dirtyFlag = new ko.DirtyFlag([self.firstName,self.lastName]);
return self;
};
Hook these into your viewmodel to detect if there were changes ...
//Property on your view model. myPerson is an instance of Person.
//Did it Change?
isDirty = ko.computed(function () {
return myPerson().dirtyFlag().isDirty();
}),
Then to resync the changes ...
//Resync Changes
dirtyFlag().reset();
Knockout has a built in PubSub system (used by their observables and other core elements).
You could make use of this system by extending each of your properties to publish an event on a certain topic after being edited.
You'd then need to have a subscription on this topic so you can track changes in the data.
Take a look at this excellent post
You can easily achieve this by providing your own mapping. The following is a very basic example, just to show you what the PubSub system could do for you. See example.
If I may give you a hint, it might be a better idea to not save per property, but to detect changes and autosave the whole array after a certain period.
So each value will publish a 'change event' on its topic, and each time you receive a message on a topic the timeout will be reset. After the timeout expires, you can save the changes to the back end.
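A very rough sketch of that autosave idea, using plain Knockout subscriptions (the viewmodel shape, the two-second delay, and the /api/items endpoint are just assumptions):
function ItemsViewModel(rawItems) {
    var self = this;
    self.Items = ko.observableArray(rawItems.map(function (data) {
        return { id: data.id, name: ko.observable(data.name) };
    }));

    var saveTimeout = null;
    function scheduleSave() {
        // Reset the timer on every change; save the whole array once
        // things have been quiet for two seconds.
        clearTimeout(saveTimeout);
        saveTimeout = setTimeout(function () {
            $.post('/api/items', { items: ko.toJSON(self.Items) });
        }, 2000);
    }

    // Subscribe to each property we want to track.
    self.Items().forEach(function (item) {
        item.name.subscribe(scheduleSave);
    });
}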
In Backbone we have an app that uses an event aggregator, located on window.App.Events.
Now, in many views, we bind to that aggregator, and I manually wrote a destroy function on a view which handles unbinding from that event aggregator and then removing the view (instead of directly removing the view).
Now, there were certain models where we needed this functionality as well, but I can't figure out how to tackle it.
Certain models need to bind to certain events, but maybe I'm mistaken: if we delete a model from a collection, it stays in memory due to these bindings to the event aggregator, which are still in place.
There isn't really a remove function on a model, like a view has.
So how would I tackle this?
EDIT
On request, a code example.
App = {
Events: _.extend({}, Backbone.Events)
};
var User = Backbone.Model.extend({
initialize: function(){
_.bindAll(this, 'hide');
App.Events.bind('burglar-enters-the-building', this.hide);
},
hide: function(burglarName){
this.set({'isHidden': true});
console.warn("%s is hiding... because %s entered the house", this.get('name'), burglarName);
}
});
var Users = Backbone.Collection.extend({
model: User
});
var House = Backbone.Model.extend({
initialize: function(){
this.set({'inhabitants': new Users()});
},
evacuate: function(){
this.get('inhabitants').reset();
}
});
$(function(){
var myHouse = new House({});
myHouse.get('inhabitants').reset([{id: 1, name: 'John'}, {id: 2, name: 'Jane'}]);
console.log('currently living in the house: ', myHouse.get('inhabitants').toJSON());
App.Events.trigger('burglar-enters-the-building', 'burglar1');
myHouse.evacuate();
console.log('currently living in the house: ', myHouse.get('inhabitants').toJSON());
App.Events.trigger('burglar-enters-the-building', 'burglar2');
});
View this code in action on jsFiddle (output in the console): http://jsfiddle.net/saelfaer/szvFY/1/
As you can see, I don't bind to the events on the model, but to an event aggregator.
Unbinding events from the model itself is not necessary, because if it's removed nobody will ever trigger an event on it again. But the event aggregator is always in place, for the ease of passing events through the entire app.
The code example shows that even when they are removed from the collection, and thus no longer live in the house, they still execute the hide command when a burglar enters the house.
I see that even when the event binding goes in this direction, Object1 -> listening -> Object2, the binding has to be removed in order for Object1 to lose any live reference.
And seeing that listening to the Model's remove event is not a solution, because it is not fired on a Collection.reset() call, we have two solutions:
1. Overwrite the normal Collection cleanup
As @dira says here, you can overwrite Collection._removeReference to do a more thorough cleanup.
I don't like this solution for two reasons:
I don't like to overwrite a method that has to call super after it.
I don't like to overwrite private methods.
2. Over-wrapping your Collection.reset() calls
Which is the opposite: instead of adding deeper functionality, add upper functionality.
Then instead of calling Collection.reset() directly, you can call an implementation that cleans up the models before they are silently removed:
cleanUp: function( data ){
this.each( function( model ) { model.unlink(); } );
this.reset( data );
}
A shorter version of your code could look like this:
AppEvents = {};
_.extend(AppEvents, Backbone.Events)
var User = Backbone.Model.extend({
initialize: function(){
AppEvents.on('my_event', this.listen, this);
},
listen: function(){
console.log("%s still listening...", this.get('name'));
},
unlink: function(){
AppEvents.off( null, null, this );
}
});
var Users = Backbone.Collection.extend({
model: User,
cleanUp: function( data ){
this.each( function( model ) { model.unlink(); } );
this.reset( data );
}
});
// testing
var users = new Users([{name: 'John'}]);
console.log('users.size: ', users.size()); // 1
AppEvents.trigger('my_event'); // John still listening...
users.cleanUp();
console.log('users.size: ', users.size()); // 0
AppEvents.trigger('my_event'); // (nothing)
Check the jsFiddle.
Update: verification that the Model is removed after removing the binding-event link
First we verify that Object1 listening to an event on Object2 creates a link in the direction Object2 -> Object1:
In the above image we see that the Model (#314019) is not only retained by the users collection but also by the AppEvents object, which is observing it. From a programmer's perspective the event link looks like object that listens -> to -> object that is listened to, but in fact it is completely the opposite: object that is listened to -> to -> object that is listening.
Now if we use Collection.reset() to empty the Collection, we see that the users link has been removed but the AppEvents link remains:
The users link has disappeared, and so has the OurModel.collection link, which I think is part of Collection._removeReference()'s job.
When we use our Collection.cleanUp() method the object disappears from memory. I can't make the Chrome profiler explicitly tell me that object #314019 has been removed, but I can see that it is no longer among the objects in memory.
I think the clean references process is a tricky part of Backbone.
When you remove a Model from a Collection, the Collection takes care to unbind any event on the Model that the Collection itself is binding. Check this private Collection method.
Maybe you can use such a same technique in your Aggregator:
// ... Aggregator code
the_model.on( "remove", this.unlinkModel, this );
// ... more Aggregator code
unlinkModel: function( model ){
model.off( null, null, this );
}
This is for the case where the direction of the binding is Aggregator -> Model. If the direction is the opposite, I don't think you have to do any cleaning after the Model is removed.
Instead of wrapping Collection's reset with cleanUp as fguillen suggested, I prefer extending Collection and overriding reset directly. The reason is that cleanUp takes effect only in the client's code, but not in the library's (i.e. Backbone's).
For example, Collection.fetch may internally call Collection.reset. Unless we modify Backbone's source code, we cannot unbind models from events (as in cleanUp) after calling Collection.fetch.
Basically, my suggested snippet is as follows:
Basically, my suggested snippet is as follows:
var MyCollection = Backbone.Collection.extend({
reset: function(models, options) {
this.each(function(model) {
model.unlink(); // same as fguillen's code
});
Backbone.Collection.prototype.reset.apply(this, arguments);
}
});
Later, we can create new collections based on MyCollection.
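Usage is then the same as with a normal collection, for example (assuming the Backbone versions of that era, where fetch goes through reset by default):
var users = new MyCollection();
users.fetch();                      // fetch resets the collection, so old models get unlink()ed first
users.reset([{ name: 'Jane' }]);    // explicit resets get the same cleanup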