Backbone-associations vs Backbone-relational - javascript

I have been looking for information about how we can build relationships in Backbone and came across the following two nice plugins:
Backbone-relational
Backbone-associations
Both seem to have existed for more than two years and seem to be stable. However, Backbone-relational outshines Backbone-associations in the following respects:
Providing almost all relation types, such as one-to-one, one-to-many, and many-to-one, as we have in databases
Nice documentation (similar to Backbone.js) on first glance
Since I haven't had time to go through both plugins extensively, I would like to know the following from people with experience:
Do both support AMD (e.g. RequireJS)?
How easy is it to use each plugin with a back-end server like Ruby on Rails?
How easy is it to implement polymorphic relationships?

The biggest difference is that Backbone-relational forbids creating multiple instances of the same model with identical ids. Consider:
// Define the related model first so Person can reference it.
let Movie = Backbone.RelationalModel.extend({});

let Person = Backbone.RelationalModel.extend({
    relations: [{
        type: Backbone.HasMany,
        key: 'likes_movies',
        relatedModel: Movie
    }]
});

let peter = new Person({
    likes_movies: [{id: 1, title: 'Fargo'}, {id: 2, title: 'Adams Family'}]
});

let john = new Person({
    likes_movies: [{id: 1, title: 'Fargo'}, {id: 2, title: 'Adams Family'}]
});
// Change title of one of Peter's movies
peter.get('likes_movies').get(1).set('title', 'Fargo 2 (Sequel)');
// John's corresponding movie will have changed its name
console.log(john.get('likes_movies').get(1)); // Fargo 2 (Sequel)
If rewritten for Backbone-associations, the movie title for John wouldn't have changed. This can be considered a feature or a disadvantage, depending on how you look at it.
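For reference, a rough sketch of how the same models might be declared with Backbone-associations, based on my reading of its docs (Backbone.AssociatedModel and Backbone.Many are its counterparts to RelationalModel and HasMany):
// Roughly equivalent definition with Backbone-associations: each Person gets
// its own nested Movie instances, so Peter's edit would not leak into John's.
let Movie = Backbone.AssociatedModel.extend({});

let Person = Backbone.AssociatedModel.extend({
    relations: [{
        type: Backbone.Many,      // one-to-many
        key: 'likes_movies',
        relatedModel: Movie
    }]
});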
Besides this, both libraries are very similar, except that development of Backbone-associations seems to have stopped almost a year ago.

Actually, based on each project's GitHub Pulse (activity indicator), the Backbone-relational community seems much more active.

I studied both of these libraries when I was looking for something like EmberData or Restangular for Backbone.
Both of these libraries try to make up for Backbone's main weakness: properly handling RESTful API resources.
In fact, Backbone encourages creating a new model each time one needs to be rendered (instead of reusing the instance already used for another rendering elsewhere in the application).
Inconsistencies then occur because model updates are not propagated everywhere in the web application.
A cache of Backbone model instances is therefore required.
Backbone Relational provides such a cache but Backbone Association doesn't.
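As a quick sketch of what that cache buys you (reusing the Person/Movie example above and Backbone Relational's documented findOrCreate helper):
// Backbone Relational's store returns the already-known instance for a given id
// rather than building a duplicate.
let a = Movie.findOrCreate({id: 1, title: 'Fargo'});
let b = Movie.findOrCreate({id: 1});
console.log(a === b); // true - both variables point at the same cached instance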
Furthermore, both have reimplemented core Backbone methods (set, get, reset, trigger), so they are strongly coupled with Backbone.
That can complicate Backbone upgrades, especially if you use another MVC framework on top of Backbone (Marionette, Thorax, Chaplin, ...).
Backbone Association is lighter than Backbone Relational in terms of lines of code (roughly 800 vs 2000).
The Backbone Association implementation is easier to debug because it manages relationships directly in the overridden methods (set, get, ...).
By contrast, Backbone Relational relies on queues to synchronize relationship content with its internal store, which makes debugging tricky.
Another lightweight (but less used) alternative is "Backbone SuperModel": http://pathable.github.io/supermodel/
This library consists of about 800 lines of code that are easier to understand than Backbone Relational's (I was able to fix a little bug in it myself).
It offers a Backbone model instance cache based on Backbone.Collection.
For my part:
I succeeded in integrating the last one (Backbone SuperModel) with RequireJS
I manage some polymorphic associations with it
A protocol between my web application and my Java backend emerged
I've been able to upgrade Backbone and Marionette every time I need to

Related

Single Object Model for AngularJS

I am experimenting with a few best practices in AngularJS, especially around designing the model. One true power of AngularJS, in my opinion, is
'When model changes view gets updated & vice versa'
That leads to the obvious fact
'At any given time the model is the single source of truth for application state'
Now, after reading various blog posts on designing the right model structure, I decided to use something like a 'Single Object' approach, meaning the whole app state is maintained in a single JavaScript object.
For example, for a to-do application:
$scope.appState = {
name: "toDoApp",
auth: {
userName: "John Doe",
email: "john#doe.com",
token: "DFRED%$%ErDEFedfereRWE2324deE$%^$%#423",
},
toDoLists: [
{ name: "Personal List",
items: [
{ id: 1, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0},
{ id: 2, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 1},
{ id: 3, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0}]
},
{ name: "Work List",
items: [
{ id: 1, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : -1},
{ id: 2, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0},
{ id: 3, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0}]
},
{ name: "Family List",
items: [
{ id: 1, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 2},
{ id: 2, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0},
{ id: 3, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 5}]
}
]
};
This object will grow HUGE depending on the application's complexity. Regarding this, I have the worries below and have marked them as questions.
Is such an approach advisable? What are the downsides and pitfalls I will face when the application starts to scale?
When a small portion of the object is updated, say a priority is increased, will Angular smartly re-render just the delta, or will it consider the whole object changed and re-render the whole screen? (That would lead to poor performance.) If so, what are the workarounds?
Now, since the whole DOM got smoothly translated into one JavaScript object, the application has to keep manipulating this object. Do we have the right tools for complex JavaScript object manipulation, the way jQuery was the king of DOM manipulation?
Despite the above doubts, I see the advantages below.
Data is neatly abstracted and well organized, so that at any time it can be serialized to a server, to Firebase, or exported locally for the user.
Implementing crash recovery will be easy; think of this feature as the 'Hibernate' option on desktops.
Model and View are totally decoupled. For example, company A can write the Model to maintain state, a few obvious Controllers to change the model, and some basic views to interact with users. Company A can then invite other developers to openly write their own views, requesting more controllers and REST methods from company A. This will empower LEAN development.
What if I start versioning this object to the server? I could then replay the site to the user exactly as he saw it, and he could continue working without hassle. This would work as a true back button for single-page apps.
In my day job we use the "state in a single object" pattern in a big enterprise AngularJS application. So far I can only see benefits. Let me address your questions:
Is such an approach advisable? What are the downsides and pitfalls I will face when the application starts to scale?
I see two main benefits:
1) DEBUGGING. When the application scales, the toughest question to answer is what is happening to my application right now? When I have all the application state in one object, I can just print it on the console at any point in time while the application is running.
That means it's much easier to understand what is happening when an error occurs, and you can even do it with the application on production, without the need to use debugging tools.
Write your code using mostly pure functions that process the state object (or parts of it), inject those functions and the state in the console, and you'll have the best debugging tool available. :)
2) SIMPLICITY. When you use a single object, you know very clearly on your application what changes state and what reads state. And they are completely separate pieces of code. Let me illustrate with an example:
Suppose you have a "checkout" screen, with a summary of the checkout, the freight options, and the payment options. If we implement Summary, Freight and PaymentOptions with separate and internal state, that means that every time the user changes one of the components, you have to explicitly change the others.
So, if the user changes the Freight option, you have to call the Summary in some way to tell it to update its values. The same has to happen if the user selects a PaymentOption with a discount. Can you see the spaghetti code building?
When you use a centralized state object things get easier, because each component only interacts with the state object. Summary is just a pure function of the state. When the user selects a new freight or payment option, the state is updated. Then, the Summary is automatically updated, because the state has just changed.
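A tiny sketch of that idea, with a made-up state shape just for illustration:
// The summary never stores its own copy of anything; it is recomputed from the
// central state whenever the state changes.
function summarize(state) {
    var itemsTotal = state.cart.items.reduce(function(sum, item) {
        return sum + item.price;
    }, 0);
    return {
        itemsTotal: itemsTotal,
        freight: state.freight.selected.price,
        total: itemsTotal + state.freight.selected.price - state.payment.selected.discount
    };
}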
When a small portion of the object is updated, say a priority is increased, will Angular smartly re-render just the delta, or will it consider the whole object changed and re-render the whole screen? (That would lead to poor performance.) If so, what are the workarounds?
We encountered some performance problems when using this architecture with angular. Angular dirty checking works very well when you use watchers on objects, and not so well when you use expensive functions. So, what we usually do when finding a performance bottleneck is saving a function result in a "cache" object set on $scope. Every time the state changes we calculate the function again and save it to the cache. Then reference this cache on the view.
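For instance, a minimal sketch of that caching trick against the asker's appState object (the watch expression and cache property are just illustrative):
// Recompute the expensive value only when the watched slice of state changes,
// and let the view bind to the cheap cached result instead of a function call.
$scope.cache = {};
$scope.$watch('appState.toDoLists', function(toDoLists) {
    $scope.cache.openTasks = toDoLists.reduce(function(total, list) {
        return total + list.items.filter(function(item) {
            return item.status === 'Open';
        }).length;
    }, 0);
}, true); // deep watch, since the items are nested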
Now, since the whole DOM got smoothly translated into one JavaScript object, the application has to keep manipulating this object. Do we have the right tools for complex JavaScript object manipulation, the way jQuery was the king of DOM manipulation?
Yes! :) Since we have a big object as our state, each and every library written to manipulate objects can be used here.
All the benefits you mentioned are true: it makes it easier to take snapshots of the application, serialize it, implement undos...
I've written some more implementation details of the centralized state architecture in my blog.
Also look for information regarding frameworks that are based on the centralized state idea, like Om, Mercury and Morearty.
Is such an approach advisable? What are the downsides and pitfalls I will face when the application starts to scale?
I'd say overall this is probably not a great idea, since it creates major problems with the maintainability of this massive object and introduces the possibility of side effects throughout the entire application, making it hard to test properly.
Essentially you get no sort of encapsulation since any method anywhere in the app might have changed any part of the data model.
When a small portion of the object is updated, say a priority is increased, will Angular smartly re-render just the delta, or will it consider the whole object changed and re-render the whole screen? (That would lead to poor performance.) If so, what are the workarounds?
When a digest occurs in Angular, all the watches are processed to determine what has changed, and every change causes the handlers for those watches to be called. So long as you are conscious of how many DOM elements you're creating and do a good job of managing that number, performance isn't a huge issue. There are also options like bind-once to avoid having too many watchers if that becomes an issue. (The Chrome profiling tools are great for figuring out whether you need to work on these problems and for finding the correct targets for performance work.)
Now, since the whole DOM got smoothly translated into one JavaScript object, the application has to keep manipulating this object. Do we have the right tools for complex JavaScript object manipulation, the way jQuery was the king of DOM manipulation?
You can still use jQuery, but you would want to do any DOM manipulation within directives. This allows you to apply it within the view, which is really the driver of the DOM, keeps your controllers from being tied to the views, and keeps everything testable. Angular includes jqLite, a smaller derivative of jQuery, but if you include jQuery before Angular it will use that instead, and you can use the full power of jQuery within Angular.
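As a small illustrative sketch (the module and directive names are hypothetical), the DOM work stays inside a directive's link function:
// Controllers never touch the DOM; the directive wraps the jqLite/jQuery calls.
angular.module('app').directive('highlightOnChange', function() {
    return {
        restrict: 'A',
        link: function(scope, element, attrs) {
            scope.$watch(attrs.highlightOnChange, function() {
                element.addClass('highlight'); // jqLite, or full jQuery if it was loaded first
            });
        }
    };
});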
Data is neatly abstracted and well organized, so that at any time it can be serialized to a server, to Firebase, or exported locally for the user. Implementing crash recovery will be easy; think of this feature as the 'Hibernate' option on desktops.
This is true but I think it's better if you come up with a well defined save object that persists the information you need to restore state rather than storing the entire data model for all the parts of the app in one place. Changes to the model over time will cause problems with the saved versions.
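For instance, a hypothetical save object derived from the asker's appState, instead of persisting the whole tree:
// Persist only what is needed to restore the session, with a version number so
// old saves can be migrated when the model changes shape over time.
function buildSaveState(appState) {
    return {
        version: 1,
        user: { email: appState.auth.email },
        listNames: appState.toDoLists.map(function(list) { return list.name; })
    };
}
localStorage.setItem('appSave', JSON.stringify(buildSaveState($scope.appState)));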
Model & View totally decoupled. For example, company A can write Model
to maintain state and few obvious Controllers to change the model and
some basic view to interact with users. Now this company A can invite
other developer to openly write their own views and requesting the
company A for more controllers and REST methods. This will empower
LEAN development.
So long as you write your models in your controllers this still remains true and is an advantage. You can also write custom directives to encapsulate functionality and templates so you can greatly reduce the complexity of your code.
What if I start versioning this object to the server? I could then replay the site to the user exactly as he saw it, and he could continue working without hassle. This would work as a true back button for single-page apps.
I think ngRoute or ui-router really already cover this for you and hooking into them to save the state is really a better route (no pun intended).
Also just my extended opinion here, but I think binding from the view to the model is one of the great things using Angular gives us. In terms of the most powerful part I think it's really in directives which allow you to extend the vocabulary of HTML and allow for massive code re-use. Dependency injection also deserves an honorable mention, but yeah all great features no matter how you rank 'em.
I don't see any advantages in this approach. How is one huge object more abstracted than, say, an application model (app name, version, etc.), a user model (credentials, tokens, other auth stuff), and a toDoList model (a single object with a name and a collection of tasks)?
Now regarding decoupling of view and model. Let's say you need a widget to display current user's name. In Single Object approach, your view would look something like this:
<div class="user" ng-controller="UserWidgetCtrl">
<span>{{appState.auth.userName}}</span>
</div>
Compare with this:
<div class="user" ng-controller="UserWidgetCtrl">
<span>{{user.userName}}</span>
</div>
Of course, you might argue that it is up to UserWidgetCtrl to provide access to the user object, thus hiding the structure of appState. But then again, the controller must be coupled to the appState structure:
function UserWidgetCtrl($scope) {
$scope.user = $scope.appState.auth;
}
Compare with this:
function UserWidgetCtrl($scope, UserService) {
UserService.getUser().then(function(user) {
$scope.user = user;
})
}
In the latter case, the controller does not get to decide where the user data comes from. In the former case, any change in the appState hierarchy means that either controllers or views will have to be updated. You could, however, keep a single object behind the scenes, but access to its separate parts (the user, in this case) should be abstracted by dedicated services.
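The service itself could still read from a single state object behind the scenes; here is a hypothetical sketch, where app is the assumed Angular module and AppState an assumed service holding the big object:
// Consumers only ever see the slice they asked for, not the whole appState tree.
app.factory('UserService', ['$q', 'AppState', function($q, AppState) {
    return {
        getUser: function() {
            return $q.when(AppState.auth); // resolves with just the user slice
        }
    };
}]);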
And don't forget: as your object structure grows, your $watch'es will get slower and consume more memory, especially with the object-equality flag turned on.

MVVM with Knockout.js

I'm trying to implement an MVVM-based Single Page Application and am currently using the framework Knockout.js to handle the viewmodel/view portion of MVVM. I'm confused though, as every example I've looked at for implementing Knockout involves saving an entire viewmodel to the database. Aren't these examples missing a "model" step, where the viewmodel syncs with the data-layer model and the model does validation / server synchronization?
I would like to have multiple different templates/views on the single page each with a different viewmodel. Another thing that I find is missing with knockout.js is synchronizing a single model (not viewmodel) across different views. I don't think it makes sense to have one giant viewmodel that every view shares, so I was thinking that each view would have its own viewmodel but each viewmodel would sync with the fields of just a few application-wide models that are needed for each view.
The page I'm working on fetches a giant model (30+ fields, multiple layers of parent/child relationships), and I think it makes sense to just have all of my viewmodels sync with this model. I investigated Knockback.js (which combines Knockout.js and Backbone.js), but I ended up re-writing the majority of the functions such as fetch, set, and save, because the page gets its data from an API (and I can't just sync an entire model back and forth with the server), so I decided against it.
A visual example of my application:
(model layer)                M      |      M
(viewmodel/view layer)   VM-V | VM-V | VM-V | VM-V
Another example: a model might be User = {firstName: "first", lastName: "last", ... }.
One viewmodel only needs the first name, another viewmodel only needs the last name:
ViewModelA = {firstName: app.User.firstName()}
ViewModelB = {lastName: app.User.lastName()}
Is the only way to do this to define a pub/sub system for Model and Viewmodel changes? Is this even good/maintainable architecture? Am I missing a basic concept here? All advice welcome.
If I read this correctly, there are a lot of questions in here, all focused on how to build an MVVM / SPA with Knockout. There are a few things to tackle, as you pointed out. One is how to communicate between viewmodel/view pairs.
Master ViewModel
One way to do that is to have a master viewmodel, as in the answer from @Tyrsius. Your shell could have a viewmodel that binds the more broadly available data. The master viewmodel could orchestrate the child viewmodels, too. If you go this route, you have to be careful to bind the outer shell to the master viewmodel and the inner ones to specific HTML elements in the DOM. The master viewmodel can facilitate the communication between them if need be.
Decoupled View/ViewModel Pairs
Another option is to use viewmodel/view pairs and no master viewmodel. Each view is loaded into a region of the DOM and bound on its own. They act as separate units and are decoupled from one another. You could use pub/sub to talk between them, but if all you need is a way for the data to be synched through observables, Knockout provides many options. The one I like is to have each viewmodel surface model objects. So a view has a viewmodel which surfaces data (from a model) that is specific to the view, and many viewmodels may surface the same model in different ways. When a view updates a viewmodel property (one that lives in a model), the change then ripples to any other loaded viewmodel that also uses the same model.
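A minimal sketch of that "one model surfaced by several viewmodels" idea (the element ids and viewmodel names are made up):
// Both viewmodels hold references to the same observable model, so an edit made
// through one binding context shows up immediately in the other.
var userModel = {
    firstName: ko.observable('Ada'),
    lastName:  ko.observable('Lovelace')
};

function GreetingVM(model) { this.name = model.firstName; }
function ProfileVM(model)  { this.first = model.firstName; this.last = model.lastName; }

ko.applyBindings(new GreetingVM(userModel), document.getElementById('greeting'));
ko.applyBindings(new ProfileVM(userModel),  document.getElementById('profile'));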
DataContext
Going a bit further, you could create a datacontext module that manages the data in the models. You ask the datacontext for a model (e.g. a list of customers) and the datacontext checks whether it has it cached already; if not, it goes and gets it with an ajax call. Either way, that is abstracted away from the viewmodels and models. The datacontext gets the data and returns model(s) to the viewmodel. This way you are very decoupled, yet you can share the data (your models) via the datacontext.
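A rough sketch of such a datacontext (the URL and caching policy are hypothetical):
// The datacontext owns the cache; viewmodels just ask it for models and never
// know whether the answer came from memory or from an ajax call.
var datacontext = (function ($) {
    var cache = {};
    function getCustomers() {
        if (cache.customers) {
            return $.Deferred().resolve(cache.customers).promise();
        }
        return $.getJSON('/api/customers').then(function (data) {
            cache.customers = data; // could map into richer model objects here
            return cache.customers;
        });
    }
    return { getCustomers: getCustomers };
})(jQuery);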
I could go on and on ... but please tell me if this is answering your question. If not, happy to answer any other specifics.
** Disclaimer: I'm building a Pluralsight course on SPA's (using Knockout and this strategy) :-)
This is a popular field of interest right now, so I expect you will get some better answers, but here goes.
The Model
Yes, you should absolutely have a server-side representation of the data, which is your model. What this is depends on your server and your database. For MVC3, this is your entity model. For Django or Ruby, you will have defined db models as part of your db setup. This part is up to your specific technology. But again, yes, you should have a model, and the server should absolutely perform data validation.
The Application (ViewModel)
It is recommended that your views each have their own viewmodel. Your page could then also have a viewmodel, an App Viewmodel if you will, that keeps track of all of them. If you go this route, the app viewmodel should be responsible for switching between the views and implementing any other application-level logic (like hash-based navigation, another popular single-page tool). This hierarchy is very important, but not always simple. It will come down to the specific requirements of your application. You are not limited to one flat viewmodel, and this is not the only possible method.
Ad Hoc Example:
var ThingViewModel = function(name, data){
    this.name = ko.observable(name);
    //Additional viewmodel stuffs
};

var AppViewModel = function(initialData){
    //Process initial data
    this.thing = new ThingViewModel(someName, someData);
};
I am working on a similar project right now, purely for study (not a real-world app), that is hosted here on GitHub, if you would like to take a look at some real examples. Note, the dev branch is quite a bit ahead of the master branch at the moment. I'm sure it contains some bad patterns (feel free to point them out, I am learning too), but you might be able to learn a few things from it anyway.
I have a similarly complex solution in which I am reworking a WPF application into a web version. The WPF version dealt with complex domain objects which it bound to views by way of presenter models.
In the web version I have implemented simplified and somewhat flattened server-side view models which are translated back and forth from / to domain objects using Automapper. Then those server-side view models are sent back and forth to the client as JSON and mapped into / onto corresponding Knockout view models (instantiable functions which each take responsibility for creating their children with sub-mapping options) using the mapping plugin.
When I need to save / validate my UI, I map all or part of my Knockout view model back to a plain Javascript object, post it as JSON, the MVC framework binds it back to server-side view models, these get Automapped back to domain objects, validated and possibly updated by our domain layer, and then a revised full or partial graph is returned and remapped.
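In case it helps, the client-side half of that round trip looks roughly like this with the Knockout mapping plugin (the URL and variable names are illustrative):
// Server JSON -> observable view model, user edits through bindings, then back
// to a plain object for posting to the MVC endpoint.
var viewModel = ko.mapping.fromJS(serverJson);
ko.applyBindings(viewModel);
// ...later, when saving...
var payload = ko.mapping.toJS(viewModel);
$.ajax({
    url: '/api/save',
    type: 'POST',
    contentType: 'application/json',
    data: JSON.stringify(payload)
});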
At present I have only one main page where the Knockout action takes place but I anticipate that like you, I will end up with multiple contexts which need to deal with the same model (my domain objects) pulled as differing view models depending on what I'm doing with them.
I have structured the server side view model directories etc in anticipation of this, as well as the structure of my Knockout view models. So far this approach is working nicely. Hope this helps.
During a project I developed a framework (which uses KnockoutJS) that provides decoupled View/ViewModel pairs and allows instantiating subviews in a view. The whole handling of the view and subview instantiation is provided by the framework. It works like MVVM with XAML in WPF.
Have a look at http://visto.codeplex.com

Does ember.js still support ObjectController? If not, what replaces it?

I'm trying to learn some Ember.js, and while I realize everything is in flux at the moment, it seems that this bit of code from the SproutCore 2 guides (which are linked to from the Ember.js GitHub readme) doesn't work any longer:
App.userController = SC.ObjectController.create({
content: SC.Object.create({
firstName: "Albert",
lastName: "Hofmann",
posts: 25,
hobbies: "Riding bicycles"
})
});
Looking at the Ember.js source, the only type of controller that seems to be supported is an array controller. Is there an established best practice for proxying between a view and a single model object that is not part of an array/collection? Or do people forego the proxying and simply set up bindings directly between the model and view objects? Thoughts?
There are plans to bring back ObjectController/ObjectProxy. Peter and I have started working on it here, but we need to add some lower level functionality to Ember before it can be fully supported.
Until then, you can use Ember.Object with a content property. You'll have to explicitly reference the content property in property paths (eg. App.userController.content). When ObjectController is finished you'll be able to switch your controllers to inherit from it instead and you can update your property paths to not explicitly reference content.
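In other words, the interim version of the snippet from the question might look something like this:
// A plain Ember.Object standing in for ObjectController: the wrapped model lives
// under 'content', and property paths reference it explicitly.
App.userController = Ember.Object.create({
  content: Ember.Object.create({
    firstName: "Albert",
    lastName: "Hofmann",
    posts: 25,
    hobbies: "Riding bicycles"
  })
});
// e.g. bindings and templates would use App.userController.content.firstName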
UPDATED: Yes, Ember.ObjectController is a first-class part of Ember and is most frequently used to proxy a model's properties for easy rendering by templates. See http://emberjs.com/api/classes/Ember.ObjectController.html for documentation.
It's in master now, see:
https://github.com/emberjs/ember.js/commit/c6954ba40ab9f007dd499634bfccf40fc31a73d7

Models vs Views in Backbone.js

For the last couple of days, I have been working with and learning Backbone.js. I have read the documentation on their site. I have also read a few tutorials available here.
As per my understanding, below are a few major differences between Views and Models.
Only a View has an 'el'. Why is it not there in a Model?
Only Models have 'get', 'set', and 'save' methods.
Only Models have functions like fetch, save, destroy, validate, clear, and has.
According to the Hello World examples here, a View can also do the things Models can do.
Both have extend, render, initializer, getter setter methods.
Both can be converted into JSON using toJSON.
Hence, I am confused between Models and Views. When should I use each one?
My question is: what is the practical difference between Models and Views? In which situations should I use Models vs Views? Which should I use for displaying (rendering)?
Can anyone good at Backbone.js explain with a practical scenario?
Your help will make my understanding much clearer.
Models and Views are not Backbone-specific terms. You should read about the MVC paradigm first.
A Model contains data and the logic for manipulating that data. A View describes how this data should be displayed.
Hence only a View has an 'el', because it is what is used while displaying data.
Getters and setters belong on the Model, in keeping with the MVC paradigm.
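To make the division of labour concrete, here is a small illustrative sketch (the Task/TaskView names are invented for the example):
// The Model holds the data and the data logic; the View owns a DOM element and
// re-renders whenever the model it observes changes.
var Task = Backbone.Model.extend({
    defaults: { title: '', done: false },
    toggle: function() { this.set('done', !this.get('done')); }   // data logic
});

var TaskView = Backbone.View.extend({
    tagName: 'li',
    initialize: function() {
        this.listenTo(this.model, 'change', this.render);          // display logic
    },
    render: function() {
        this.$el.text(this.model.get('title') + (this.model.get('done') ? ' (done)' : ''));
        return this;
    }
});

var view = new TaskView({ model: new Task({ title: 'Learn Backbone' }) });
$('body').append(view.render().el);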

Ajax Architecture - MVC? Other?

Hey all, I'm looking at building an ajax-heavy site, and I'm trying to spend some time upfront thinking through the architecture.
I'm using Code Igniter and jquery. My initial thought process was to figure out how to replicate MVC on the javascript side, but it seems the M and the C don't really have much of a place.
A lot of the JS would be ajax calls BUT I can see it growing beyond that, with plenty of DOM manipulation, as well as exploring the HTML5 clientside database. How should I think about architecting these files? Does it make sense to pursue MVC? Should I go the jquery plugin route somehow? I'm lost as to how to proceed and I'd love some tips. Thanks all!
I've made an MVC style Javascript program. Complete with M and C. Maybe I made a wrong move, but I ended up authoring my own event dispatcher library. I made sure that the different tiers only communicate using a message protocol that can be translated into pure JSON objects (even though I don't actually do that translation step).
So jQuery lives primarily in the V part of the MVC architecture. On the M and C side, I primarily have code which could run in the standalone CLI version of SpiderMonkey, or in the server-side Rhino implementation of JavaScript, if necessary. In this way, if requirements change later, I can have my M and C layers run on the server side, communicating via those JSON messages with the V side in the browser. It would only require some modifications to my message dispatcher to change this, though. In the future, if browsers get some peer-to-peer-style technologies, I could get the different tiers running in different browsers, for instance.
However, at the moment, all three tiers run in a single browser. The event dispatcher I authored allows multicast messages, so implementing an undo feature now will be as simple as creating a new object that simply listens to the messages that need to be undone. Autosaving state to the server is a similar maneuver. I'm able to do full detailed debugging and profiling inside the event dispatcher. I'm able to define exactly how the code runs, and how quickly, when, and where, all from that central bit of code.
Of course the main drawback I've encountered is I haven't done a very good job of managing the complexity of the thing. For that, if I had it all to do over, I would study very very carefully the "Functional Reactive" paradigm. There is one existing implementation of that paradigm in javascript called flapjax. I would ensure that the view layer followed that model of execution, if not used specifically the flapjax library. (i'm not sure flapjax itself is such a great execution of the idea, but the idea itself is important).
The other big implementation of functional reactive, is quartz composer, which comes free with apple's developer tools, (which are free with the purchase of any mac). If that is available to you, have a close look at that, and how it works. (it even has a javascript patch so you can prototype your application with a prebuilt view layer)
The main takeaway from the functional reactive paradigm is to make sure that the view doesn't appear to maintain any kind of state except the one you've just given it to display. To put it in more concrete terms, I started out with "add an object to the screen" / "remove an object from the screen" type messages, and I'm now tending more towards "display this list of objects, and I'll let you figure out the most efficient way to get from the current display to what I now want you to display". This has eliminated a whole host of bugs having to do with sloppily managed state.
This also gets around another problem I've been having with bugs caused by messages arriving in the wrong order. That's a big one to solve, but you can sidestep it by just sending in one big package the final desired state, rather than a sequence of steps to get there.
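A toy sketch of the difference (the dispatcher object and topic name are hypothetical stand-ins for the event dispatcher described above):
// Instead of handling "add item" / "remove item" messages and accumulating state,
// the view receives the complete desired list and rebuilds (or diffs) the DOM.
dispatcher.subscribe('items/changed', function (desiredItems) {
    var $list = $('#items').empty();               // naive reconciliation: rebuild
    desiredItems.forEach(function (item) {
        $list.append($('<li>').text(item.label));
    });
});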
Anyways, that's my little rant. Let me know if you have any additional questions about my wartime experience.
At the risk of being flamed, I would suggest another framework besides jQuery, or else you'll risk hitting its performance ceiling. Its ala-mode plugins will also present a bit of a problem in trying to separate your M, V and C.
Dojo is well known for its Data Stores for binding to server-side data over different transport protocols, and for its object-oriented, lightning-fast widget system that can be easily extended and customized. It has a style that helps guide you into clean, well-divided code, though it's not strictly MVC. That would require a little extra planning.
Dojo has a steeper learning curve than JQuery though.
More to your question, The AJAX calls and object (or Data Store) that holds and queries this data would be your Model. The widgets and CSS would be your View. And the Controller would basically be your application code that wires it all together.
In order to keep them separate, I'd recommend a loosely-coupled event-driven system. Try to directly access objects as little as possible, keeping them "black boxed" and get data via custom events or pub/sub topics.
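For instance, with Dojo's dojo/topic module the pieces never reference each other directly (the topic names below are made up):
// Loosely-coupled pub/sub: the data-store side publishes, widgets subscribe,
// and neither holds a direct reference to the other.
require(["dojo/topic"], function (topic) {
    // somewhere in a widget (the "view"):
    topic.subscribe("orders/loaded", function (payload) {
        console.log("render " + payload.count + " orders");
    });

    // somewhere in the "model" layer, after an ajax / Data Store load:
    topic.publish("orders/loaded", { count: 42 });
});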
JavaScriptMVC (javascriptmvc.com) is an excellent choice for organizing and developing a large scale JS application.
The architecture design is very practical. There are 4 things you will ever do with JavaScript:
Respond to an event
Request Data / Manipulate Services (Ajax)
Add domain specific information to the ajax response.
Update the DOM
JMVC splits these into the Model, View, Controller pattern.
First, and probably the most important advantage, is the Controller. Controllers use event delegation, so instead of attaching events, you simply create rules for your page. They also use the name of the Controller to limit the scope of what the controller works on. This makes your code deterministic, meaning if you see an event happen in a '#todos' element you know there has to be a todos controller.
$.Controller.extend('TodosController', {
    'click' : function(el, ev){ ... },
    '.delete mouseover' : function(el, ev){ ... },
    '.drag draginit' : function(el, ev, drag){ ... }
})
Next comes the model. JMVC provides a powerful Class and basic model that lets you quickly organize Ajax functionality (#2) and wrap the data with domain specific functionality (#3). When complete, you can use models from your controller like:
Todo.findAll({after: new Date()}, myCallbackFunction);
Finally, once your todos come back, you have to display them (#4). This is where you use JMVC's view.
'.show click' : function(el, ev){
    Todo.findAll({after: new Date()}, this.callback('list'));
},
list : function(todos){
    $('#todos').html( this.view(todos) );
}
In 'views/todos/list.ejs'
<% for(var i =0; i < this.length; i++){ %>
<label><%= this[i].description %></label>
<%}%>
JMVC provides a lot more than architecture. It helps you in every part of the development cycle with:
Code generators
Integrated Browser, Selenium, and Rhino Testing
Documentation
Script compression
Error reporting
I think there is definitely a place for "M" and "C" in JavaScript.
Check out AngularJS.
It helps you with your app structure and strict separation between "view" and "logic".
Designed to work well together with other libs, especially jQuery.
A full testing environment (unit, e2e) plus dependency injection are included, so testing is a piece of cake with AngularJS.
There are a few JavaScript MVC frameworks out there, this one has the obvious name:
http://javascriptmvc.com/
