I am experimenting with best practices in AngularJS, especially around model design. One true power of AngularJS, in my opinion, is that
'When the model changes, the view gets updated, and vice versa.'
That leads to the obvious fact that
'At any given time, the model is the single source of truth for application state.'
After reading various blog posts on designing the right model structure, I decided to use a 'Single Object' approach, meaning the whole app state is maintained in a single JavaScript object.
For example, for a to-do application:
$scope.appState = {
name: "toDoApp",
auth: {
userName: "John Doe",
email: "john@doe.com",
token: "DFRED%$%ErDEFedfereRWE2324deE$%^$%#423",
},
toDoLists: [
{ name: "Personal List",
items: [
{ id: 1, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0},
{ id: 2, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 1},
{ id: 3, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0}]
},
{ name: "Work List",
items: [
{ id: 1, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : -1},
{ id: 2, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0},
{ id: 3, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0}]
},
{ name: "Family List",
items: [
{ id: 1, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 2},
{ id: 2, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0},
{ id: 3, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 5}]
}
]
};
This object will grow huge depending on application complexity. With that in mind, I have the following worries, phrased as questions:
Is such an approach advisable? What downsides and pitfalls will I face when the application starts to scale?
When a small portion of the object is updated, say a priority is increased, will Angular smartly re-render just the delta, or will it consider the whole object changed and re-render the whole screen? (The latter would mean poor performance.) If so, what are the workarounds?
Now that the whole DOM has been smoothly translated into one JavaScript object, the application has to keep manipulating this object. Do we have the right tools for complex JavaScript object manipulation, the way jQuery was king of DOM manipulation?
Despite the above doubts, I see the following strong advantages:
Data is neatly abstracted and well organized, so at any time it can be serialized to the server, to Firebase, or as a local export for the user.
Implementing crash recovery will be easy; think of this feature as the 'Hibernate' option on desktop computers.
Model and view are totally decoupled. For example, company A can write the model to maintain state, a few obvious controllers to change the model, and some basic views to interact with users. Company A can then invite other developers to openly write their own views, requesting more controllers and REST methods from company A. This will empower lean development.
What if I start versioning this object to the server? I could then play the site back to the user exactly the way he saw it, and he could continue working without hassle. This would work as a true back button for single-page apps.
In my day job we use the "state in a single object" pattern in a big enterprise AngularJS application. So far I can only see benefits. Let me address your questions:
Is such an approach advisable? What downsides and pitfalls will I face when the application starts to scale?
I see two main benefits:
1) DEBUGGING. When the application scales, the toughest question to answer is: what is happening in my application right now? When I have all the application state in one object, I can just print it to the console at any point while the application is running.
That means it's much easier to understand what is happening when an error occurs, and you can even do it with the application on production, without the need to use debugging tools.
Write your code using mostly pure functions that process the state object (or parts of it), inject those functions and the state in the console, and you'll have the best debugging tool available. :)
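For illustration, here is a minimal sketch of such a pure function, written against the appState shape from the question (the function name is mine, and I assume the state object is reachable from the console):
// A pure function over the state: no side effects, safe to call from
// the browser console while the app is running.
function openTaskCounts(state) {
  return state.toDoLists.map(function(list) {
    return {
      name: list.name,
      open: list.items.filter(function(item) {
        return item.status === 'Open';
      }).length
    };
  });
}
// In the console:
// openTaskCounts(appState); // -> [{name: 'Personal List', open: 3}, ...]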
2) SIMPLICITY. When you use a single object, you know very clearly in your application what changes state and what reads state, and they are completely separate pieces of code. Let me illustrate with an example:
Suppose you have a "checkout" screen, with a summary of the checkout, the freight options, and the payment options. If we implement Summary, Freight, and PaymentOptions with separate, internal state, then every time the user changes one of the components, you have to explicitly change the others.
So, if the user changes the Freight option, you have to call the Summary in some way to tell it to update its values. The same has to happen if the user selects a PaymentOption with a discount. Can you see the spaghetti code building up?
When you use a centralized state object things get easier, because each component only interacts with the state object. Summary is just a pure function of the state. When the user selects a new freight or payment option, the state is updated. Then, the Summary is automatically updated, because the state has just changed.
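A tiny sketch of that idea (the state shape and the numbers are made up for illustration):
// The summary is derived from the state, never stored in the component:
var state = { itemsTotal: 100, freight: { price: 10 }, payment: { discount: 5 } };

function summaryTotal(state) {
  return state.itemsTotal + state.freight.price - state.payment.discount;
}

summaryTotal(state); // 105

// The user picks another freight option: only the state changes...
state.freight = { price: 25 };
// ...and the summary recomputes from the single source of truth:
summaryTotal(state); // 120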
When a small portion of the object is updated, say a priority is increased, will Angular smartly re-render just the delta, or will it consider the whole object changed and re-render the whole screen? (The latter would mean poor performance.) If so, what are the workarounds?
We encountered some performance problems when using this architecture with Angular. Angular's dirty checking works very well when you use watchers on objects, and not so well when you bind to expensive functions. So what we usually do when we find a performance bottleneck is save the function's result in a "cache" object set on $scope. Every time the state changes we recompute the function and save the result to the cache, then reference the cache in the view.
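Roughly like this (expensiveSummary and the watched path are placeholders):
$scope.cache = {};

// Recompute once per state change instead of once per binding per digest:
$scope.$watch('appState.toDoLists', function(lists) {
  $scope.cache.summary = expensiveSummary(lists);
}, true); // deep watch: itself costly on huge objects, so watch narrow paths

// The view then binds to the cheap cached value:
// <span>{{cache.summary}}</span>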
Now that the whole DOM has been smoothly translated into one JavaScript object, the application has to keep manipulating this object. Do we have the right tools for complex JavaScript object manipulation, the way jQuery was king of DOM manipulation?
Yes! :) Since our state is one big object, each and every library written to manipulate JavaScript objects can be used here.
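For example, with a general-purpose utility library such as lodash (one option among many; I'm assuming the appState object from the question):
// Read a nested value without guarding every intermediate level:
_.get(appState, 'toDoLists[0].items[1].priority'); // -> 1

// Write a nested value:
_.set(appState, 'toDoLists[0].items[1].priority', 2);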
All the benefits you mentioned are true: it makes it easier to take snapshots of the application, serialize it, implement undos...
I've written some more implementation details of the centralized state architecture in my blog.
Also look for information regarding frameworks that are based on the centralized state idea, like Om, Mercury and Morearty.
Is such an approach advisable? What downsides and pitfalls will I face when the application starts to scale?
I'd say overall this is probably not a great idea, since it creates major maintainability problems for this massive object and introduces the possibility of side effects throughout the entire application, making it hard to test properly.
Essentially you get no encapsulation, since any method anywhere in the app might have changed any part of the data model.
When a small portion of the object is updated, say a priority is increased, will Angular smartly re-render just the delta, or will it consider the whole object changed and re-render the whole screen? (The latter would mean poor performance.) If so, what are the workarounds?
When a digest occurs in Angular, all the watches are processed to determine what has changed, and each change causes the handlers for the affected watches to be called. As long as you are conscious of how many DOM elements you're creating and do a good job of managing that number, performance isn't a huge issue. There are also options like one-time binding to avoid having too many watchers if that becomes a problem. (The Chrome profiling tools are great for figuring out whether you need to work on these issues and for finding the right targets for optimization.)
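As a template-level sketch: Angular 1.3+ ships one-time binding via the :: prefix, which drops the watcher once the value stabilizes, so static parts of a huge state object stop contributing to digest cost:
<span>{{::appState.name}}</span>
<li ng-repeat="item in ::appState.toDoLists[0].items">{{::item.task}}</li>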
Now that the whole DOM has been smoothly translated into one JavaScript object, the application has to keep manipulating this object. Do we have the right tools for complex JavaScript object manipulation, the way jQuery was king of DOM manipulation?
You can still use jQuery, but you would want to do any DOM manipulation within directives. This lets you apply it within the view, which is really the driver of the DOM, keeps your controllers from being tied to the views, and keeps everything testable. Angular includes jqLite, a smaller derivative of jQuery, but if you include jQuery before Angular it will use that instead, and you can use the full power of jQuery within Angular.
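A sketch of what that looks like (the directive name and CSS class are mine; 'app' is an existing module):
app.directive('highlightOverdue', function() {
  return {
    restrict: 'A',
    link: function(scope, element, attrs) {
      // element is a jqLite (or full jQuery) wrapper around the DOM node
      scope.$watch(attrs.highlightOverdue, function(isOverdue) {
        element.toggleClass('overdue', !!isOverdue);
      });
    }
  };
});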
Data is neatly abstracted and well organized, so at any time it can be serialized to the server, to Firebase, or as a local export for the user. Implementing crash recovery will be easy; think of this feature as the 'Hibernate' option on desktop computers.
This is true, but I think it's better to come up with a well-defined save object that persists the information you need to restore state, rather than storing the entire data model for all parts of the app in one place. Changes to the model over time will cause problems with the saved versions.
Model and view are totally decoupled. For example, company A can write the model to maintain state, a few obvious controllers to change the model, and some basic views to interact with users. Company A can then invite other developers to openly write their own views, requesting more controllers and REST methods from company A. This will empower lean development.
As long as you write your models in your controllers, this remains true and is an advantage. You can also write custom directives to encapsulate functionality and templates, which can greatly reduce the complexity of your code.
What if I start versioning this object to the server? I could then play the site back to the user exactly the way he saw it, and he could continue working without hassle. This would work as a true back button for single-page apps.
I think ngRoute or ui-router already cover this for you, and hooking into them to save the state is a better route (no pun intended).
Also, just my extended opinion here, but I think binding from the view to the model is one of the great things Angular gives us. As for the most powerful part, I think it's really the directives, which let you extend the vocabulary of HTML and allow for massive code reuse. Dependency injection also deserves an honorable mention. But yes, they're all great features, no matter how you rank 'em.
I don't see any advantages in this approach. How is one huge object more abstract than, say, an application model (app name, version, etc.), a user model (credentials, tokens, other auth stuff), and a toDoList model (a single object with a name and a collection of tasks)?
Now, regarding decoupling of view and model: let's say you need a widget to display the current user's name. With the Single Object approach, your view would look something like this:
<div class="user" ng-controller="UserWidgetCtrl">
<span>{{appState.auth.userName}}</span>
</div>
Compare with this:
<div class="user" ng-controller="UserWidgetCtrl">
<span>{{user.userName}}</span>
</div>
Of course, you might argue that it is up to UserWidgetCtrl to provide access to the user object, thus hiding the structure of appState. But then again, the controller must be coupled to the appState structure:
function UserWidgetCtrl($scope) {
$scope.user = $scope.appState.auth;
}
Compare with this:
function UserWidgetCtrl($scope, UserService) {
  UserService.getUser().then(function(user) {
    $scope.user = user;
  });
}
In the latter case, the controller does not get to decide where the user data comes from. In the former case, any change in the appState hierarchy means that either controllers or views have to be updated. You could still keep a single object behind the scenes, but access to its separate parts (the user, in this case) should be abstracted by dedicated services.
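A sketch of that arrangement (registering the object as an injectable value is my choice for illustration; 'app' is the application module):
// The single object stays behind the scenes...
app.value('appState', { auth: { userName: 'John Doe' } });

// ...and only the service knows its shape:
app.factory('UserService', function($q, appState) {
  return {
    getUser: function() {
      // Controllers receive a promise and never touch appState directly.
      return $q.when(appState.auth);
    }
  };
});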
And don't forget: as your object structure grows, your $watches will get slower and consume more memory, especially with the object-equality flag turned on.
Related
I'm a computer science tutor with a lot of experience teaching Python and Java, but without a lot of web experience. I'm now working with a high school student who has been assigned a project that he wishes to implement in HTML and JS, so this is a good chance for me to learn more about this type of application.
It's an application for taking food orders on the web and showing an illustration of your order. You choose a main dish and any custom alterations. Then you choose another course of the meal, maybe a salad, and any alterations, and so on. The first page shows an empty plate. You choose a course to customize, and it takes you to a page that shows options, then another page with further options, and so forth. Each time you finish configuring a course, it takes you back to the starting page and shows a picture of the meal so far on the plate (which was formerly empty).
The main part I'm unfamiliar with is handling persistent state. Although each page will have a unique structure of images (possibly a canvas) and buttons, the app has to remember the order as configured so far as it loads each page.
I'm wondering what a simple way of handling this would be. This is a high school student with very little prior programming experience, and I'm allowed to help him, but it has to be within his grasp to understand overall.
Perhaps sessionStorage is the best bet?
Regarding possible duplication with Persist variables between page loads: my needs are a lot more complex than that question's. I need to store more than a single global variable, and I may need a framework to simplify this. In particular, I'm interested in doing this in a way that is simple enough for a high school student to understand, so that he can implement some of it himself (at least some of it has to be his own work). I don't know whether using a framework will make the job simpler (that would be good) or whether it will require more effort to understand the framework, especially for an inexperienced student (not good).
Perhaps sessionStorage is the best bet?
Yes, if you want the state to expire automatically when the browser session ends. Otherwise, use localStorage, which persists longer than that.
It's important to remember that web storage (both sessionStorage and localStorage) only stores strings, so a common technique is to use it to store JSON that you parse on load, so you can have complex state. E.g.:
// On load
var state = JSON.parse(localStorage.getItem("state-key"));
if (!state) {
// None was stored, create some
state = {/*...default state...*/};
}
// On save
localStorage.setItem("state-key", JSON.stringify(state));
Note that this bit:
var state = JSON.parse(localStorage.getItem("state-key"));
if (!state) {
relies on the fact that getItem returns null if there is no stored data with that key, and JSON.parse coerces its argument to string. null coerces to "null" which will be parsed back into null. :-)
Another way to do it is to use ||:
var state = JSON.parse(localStorage.getItem("state-key")) || {
// default state here
};
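Wrapped up as two helpers the student could write and reuse on every page (the key name and the default shape are made up):
function loadOrder() {
  // getItem returns null for a missing key, so || supplies the default
  return JSON.parse(sessionStorage.getItem('order')) || { courses: [] };
}

function saveOrder(order) {
  sessionStorage.setItem('order', JSON.stringify(order));
}

// On each page:
var order = loadOrder();
order.courses.push({ name: 'salad', alterations: ['no onions'] });
saveOrder(order);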
I am using React and the Flux architecture in a project I'm working on. I am a bit puzzled about how to break nested data up correctly into stores, and about why I should split my data into multiple stores.
To explain the problem I'll use this example:
Imagine a Todo application where you have projects. Each project has tasks, and each task can have notes.
The application uses a REST api to retrieve the data, returning the following response:
{
projects: [
{
id: 1,
name: "Action Required",
tasks: [
{
id: 1,
name: "Go grocery shopping",
notes: [
{
id: 1,
name: "Check shop 1"
},
{
id: 2,
name: "Also check shop 2"
}
]
}
]
}
]
}
The fictional application's interface displays a list of projects on the left; when you select a project, it becomes active and its tasks are displayed on the right. When you click a task, you can see its notes in a popup.
What I would do is use one single store, the "Project Store". An action makes the request to the server, fetches the data, and instructs the store to fill itself with the new data. The store internally saves this tree of entities (Projects -> Tasks -> Notes).
To be able to show and hide tasks based on which project is selected I'd also keep a variable in the store, "activeProjectId". Based on that the view can get the active project, its tasks and render them.
Problem solved.
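A rough sketch of that single store, EventEmitter-based (all names are mine, and the dispatcher wiring is omitted):
var EventEmitter = require('events').EventEmitter;

var ProjectStore = Object.assign({}, EventEmitter.prototype, {
  _projects: [],
  _activeProjectId: null,

  fill: function(projects) {        // called by the fetch action
    this._projects = projects;
    this.emit('change');
  },

  setActiveProject: function(id) {  // called by the select action
    this._activeProjectId = id;
    this.emit('change');
  },

  getActiveProject: function() {
    var id = this._activeProjectId;
    return this._projects.filter(function(p) { return p.id === id; })[0];
  }
});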
However: after searching a bit online to see if this is a good solution I see a lot of people stating that you should use a separate store per entity.
This would mean:
A ProjectStore, TaskStore and NoteStore. To be able to manage associations I would possibly also need a "TasksByProjectStore" and a "NotesByTaskStore".
Can someone please explain why this would be better? The only thing I see is a lot of overhead in managing the stores and the data flow.
There are pros and cons to using one store or multiple stores. Some implementations of Flux specifically favour one store to rule them all, so to speak, while others also facilitate multiple stores.
Whether one store or multiple stores suit your needs depends on (a) what your app does and (b) which future developments or changes you expect. In a nutshell:
One store is better if your key concern is the dependencies between your nested entities, and you are less worried about the single-entity relationships between server, store, and component. One store is great if, for example, you want to manage project-level stats about the underlying tasks and notes. Many parent-child relationships and all-in-one data fetching from the server favour a one-store solution.
Multiple stores are better if your key concern is the single-entity connections between server, store, and component. Weak entity-to-entity relationships and independent single-entity server fetches and updates favour multiple stores.
In your case, my bet is that one store is better: you have an evident parent-child relationship, and you get all the project data at once from the server.
The somewhat longer answer:
One store:
Great for minimizing the overhead of managing multiple stores.
It works well if your top view component is the only stateful component, gets all data from the store, and distributes details to stateless children.
However, the need to manage dependencies between your entities does not simply go away: instead of managing them between different stores, you manage them inside the single store, which therefore gets bigger (more lines of code).
Also, in a typical Flux setup each store emits a single 'I have changed' event and leaves it up to the component(s) to figure out what changed and whether they need to re-render. So if you have many nested entities and one of them receives many updates from the server, your superstore emits many change events, which may trigger a lot of unnecessary updates of the entire structure and all components. React with Flux can handle a lot, and detail-changed-update-everything is what it handles well, but it may not suit everyone's needs (I had to abandon this approach when it broke my transitions between states in one project).
Multiple stores:
More overhead, yes, but for some projects you get returns as well.
If you have close coupling between server data and components, with the Flux store in between, it is easier to separate concerns into separate stores.
If, for example, you are expecting many changes and updates to your notes data structure, then it is easier to have a stateful component for notes that listens to a notes store, which in turn handles notes data updates from the server. When processing changes in your notes structure, you can then focus on the notes store alone, without figuring out how notes are handled inside some big superstore.
I have an app in React which, at a basic level, is a document that displays data from an arbitrary number of 2nd-level "child" resources, and each child displays data from an arbitrary number of 3rd-level "grandchild" resources, and each grandchild displays data from an arbitrary number of 4th-level "great-grandchild" resources.
This is the basic JSON structure retrieved from my API server:
{ id: 1,
children: [
{ id: 1,
grandchildren: [
{ id: 1,
greatgrandchildren: [{ id: 1 }, { id: 2 }, ...]
},
...
]
},
...
]
}
The objects at each level have a bunch of additional properties I haven't shown here for simplicity.
As per the recommended React way of doing things, this object is retrieved and set as state on the top-level component of my app when it loads; the relevant data is then passed down as props to the children in order to build out the component tree, which mirrors this structure.
This is fine, but I need to be able to do CRUD (create/read/update/delete) operations on each resource, and it's turning out to be a pain because I have to pass the entire data object to setState() when I'm just trying to modify a small part of it. It's not so bad at the top or second level, but anything past that gets unwieldy quickly, due to the need to iterate, fetch the object I want by id, change it, then build a copy of the entire structure with just that bit changed. The React add-ons provide an update() function, which is useful, but it really only helps with the last step: I still have to deal with the nested iteration and rebuild the relevant array outside it.
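To make the pain point concrete, here is a sketch with React.addons.update against the structure above, written as it would appear inside a component method; the index lookup is exactly the boilerplate in question:
var update = React.addons.update;

// Find the array index of the child with id 1 (the manual iteration step):
var idx = -1;
this.state.children.forEach(function(child, i) {
  if (child.id === 1) { idx = i; }
});

// Build a spec that copies only the path being changed:
var spec = { children: {} };
spec.children[idx] = {
  grandchildren: { 0: { greatgrandchildren: { $push: [{ id: 3 }] } } }
};

this.setState(update(this.state, spec));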
Additional complexity comes from my goal of optimistically updating the view with temporary data, then either updating it again with the proper data returned from the server (on success) or reverting (on failure). This means I need to be careful to keep a copy of the old data/state around without mutating it (i.e. to watch out for shared refs).
Finally, I currently have a callback method for each CRUD action at each level, defined on my top-level component (12 methods altogether). This seems excessive, but they all need the ability to call this.setState(), and I'm finding it hard to factor out what they have in common. I already have a separate API object that does the actual Ajax calls.
So my question is: is there a better React-suitable approach for dealing with CRUD operations and manipulating the data for them with a nested data structure like I have, in a way that permits optimistic updates?
On the surface it looks like the Flux architecture might be the answer you are looking for. In this case you would have a store for each type of resource (or maybe one store for all of it, depending on the structure of the data) and have the views listen for changes in the data they care about.
After a bit of research I've come to understand that this state-changing problem is one of the reasons immutability libraries (like Immutable.js and mori) are such a hot topic in the React community: they facilitate modifying small parts of deeply nested structures while keeping the original copy untouched. After some refactoring I was able to make better use of the React.addons.update() helper and shrink things down a bit, but I think using one of these libraries would be the next logical step.
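For example, with Immutable.js (the path and values follow the structure above; apiResponse stands in for the fetched JSON):
var Immutable = require('immutable');

var state = Immutable.fromJS(apiResponse);

// updateIn reaches deep into the tree and returns a new tree that
// shares all unchanged parts with the old one:
var next = state.updateIn(
  ['children', 0, 'grandchildren', 0, 'greatgrandchildren'],
  function(list) { return list.push(Immutable.Map({ id: 3 })); }
);

// The old tree is untouched, which is exactly what optimistic
// updates with rollback need:
console.log(state === next); // false; revert by keeping 'state' around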
As for the structure/organization stuff, it seems Flux (or another architecture like MV*) exists to solve that problem, which makes sense because React is just a view library.
I have been looking for information about how to build relationships in Backbone and came across the following two nice plugins:
Backbone-relational
Backbone-associations
Both seem to have existed for more than two years and appear stable. However, Backbone-relational outshines Backbone-associations in the following respects:
It provides almost all the relations we have in databases: one-to-one, one-to-many, many-to-one.
Nice documentation (similar to Backbone.js's) at first glance.
Since I haven't had time to go through both plugins extensively, I would like to hear the following from experienced users:
Do both support AMD (like RequireJS)?
How easy is it to use each plugin with a back-end server like Ruby on Rails?
How easy is it to implement polymorphic relationships?
The biggest difference is that Backbone-relational forbids creating multiple instances of the same model with identical ids. Consider:
// 'Movie' must exist so the string reference in relatedModel resolves:
let Movie = Backbone.RelationalModel.extend({});

let Person = Backbone.RelationalModel.extend({
  relations: [{
    type: Backbone.HasMany,
    key: 'likes_movies',
    relatedModel: 'Movie'
  }]
});

let peter = new Person({
  likes_movies: [{id: 1, title: 'Fargo'}, {id: 2, title: 'Adams Family'}]
});

let john = new Person({
  likes_movies: [{id: 1, title: 'Fargo'}, {id: 2, title: 'Adams Family'}]
});
// Change title of one of Peter's movies
peter.get('likes_movies').get(1).set('title', 'Fargo 2 (Sequel)');
// John's corresponding movie will have changed its name
console.log(john.get('likes_movies').get(1)); // Fargo 2 (Sequel)
If rewritten for Backbone-associations, the movie title for John wouldn't have changed. This can be considered a feature or a disadvantage, depending on how you look at it.
Besides this, both libraries are very similar, except that development of Backbone-associations seems to have stopped almost a year ago.
Actually, based on GitHub Pulse (the activity indicator) for both, the Backbone-relational community seems much more active.
I studied both of these libraries when I was looking for something like Ember Data or Restangular for Backbone.
Both of them try to make up for the main Backbone weakness: properly handling RESTful API resources.
In fact, Backbone promotes creating a new model each time one needs to be rendered (instead of reusing the instance used for another rendering somewhere else in the application).
Some inconsistencies then occur, because some model updates are not propagated everywhere in the web application.
A cache of Backbone model instances is therefore required.
Backbone-relational provides such a cache, but Backbone-associations doesn't.
Furthermore, both have reimplemented the core methods of Backbone (set, get, reset, trigger), so they are strongly coupled to Backbone.
That could complicate Backbone library migrations, especially if you use another MVC framework on top of Backbone (Marionette, Thorax, Chaplin, ...).
Backbone-associations is lighter than Backbone-relational in terms of lines of code (800 vs 2000).
The Backbone-associations implementation is easier to debug, because it manages relationships directly inside the overloaded methods (set, get, ...).
By contrast, Backbone-relational relies on queues to synchronize relationship content with its internal store, which makes debugging tricky.
Another lightweight (but less used) alternative is "Backbone SuperModel": http://pathable.github.io/supermodel/
This library consists of about 800 lines of code that are easier to understand than Backbone-relational's (I was able to fix a little bug in it myself).
It offers a Backbone instance cache based on Backbone.Collection.
For my part:
I succeeded in integrating the last one with RequireJS,
I manage some polymorphic associations with it,
a protocol between my web application and my Java backend emerged, and
I've been able to upgrade Backbone and Marionette every time I needed to.
Hey all, I'm looking at building an ajax-heavy site, and I'm trying to spend some time upfront thinking through the architecture.
I'm using CodeIgniter and jQuery. My initial thought was to figure out how to replicate MVC on the JavaScript side, but it seems the M and the C don't really have much of a place.
A lot of the JS would be Ajax calls, BUT I can see it growing beyond that, with plenty of DOM manipulation as well as exploring the HTML5 client-side database. How should I think about architecting these files? Does it make sense to pursue MVC? Should I go the jQuery plugin route somehow? I'm lost as to how to proceed and I'd love some tips. Thanks all!
I've made an MVC-style JavaScript program, complete with M and C. Maybe I made a wrong move, but I ended up authoring my own event dispatcher library. I made sure that the different tiers only communicate using a message protocol that can be translated into pure JSON objects (even though I don't actually do that translation step).
So jQuery lives primarily in the V part of the MVC architecture. On the M and C side, I primarily have code that could run in the standalone CLI version of SpiderMonkey, or in the server-side Rhino implementation of JavaScript, if necessary. That way, if requirements change later, I can have my M and C layers run on the server side, communicating via those JSON messages with the V side in the browser. It would only require some modifications to my message dispatcher. In the future, if browsers get some peer-to-peer-style technologies, I could even get the different tiers running in different browsers, for instance.
However, at the moment, all three tiers run in a single browser. The event dispatcher I authored allows multicast messages, so implementing an undo feature is now as simple as creating a new object that simply listens to the messages that need to be undone. Autosaving state to the server is a similar maneuver. I'm able to do full, detailed debugging and profiling inside the event dispatcher, and to define exactly how the code runs (how quickly, when, and where) all from that central bit of code.
Of course, the main drawback I've encountered is that I haven't done a very good job of managing the complexity of the thing. For that, if I had it all to do over, I would study the "functional reactive" paradigm very carefully. There is one existing implementation of that paradigm in JavaScript called Flapjax. I would ensure that the view layer followed that model of execution, if not use the Flapjax library specifically. (I'm not sure Flapjax itself is such a great execution of the idea, but the idea itself is important.)
The other big implementation of functional reactive programming is Quartz Composer, which comes free with Apple's developer tools (which are free with the purchase of any Mac). If that is available to you, have a close look at it and how it works. (It even has a JavaScript patch, so you can prototype your application with a prebuilt view layer.)
The main takeaway from the functional reactive paradigm is to make sure that the view doesn't appear to maintain any kind of state except the one you've just given it to display. To put it in more concrete terms, I started out with "add an object to the screen" and "remove an object from the screen" messages, and I'm now tending more towards "display this list of objects, and I'll let you figure out the most efficient way to get from the current display to what I now want you to display". This has eliminated a whole host of bugs having to do with sloppily managed state.
This also gets around another problem I've been having: bugs caused by messages arriving in the wrong order. That's a big one to solve, but you can sidestep it by just sending the final desired state in one big package, rather than a sequence of steps to get there.
Anyways, that's my little rant. Let me know if you have any additional questions about my wartime experience.
At the risk of being flamed, I would suggest another framework besides jQuery, or else you'll risk hitting its performance ceiling. Its à-la-mode plugins will also present a bit of a problem in trying to separate your M, V, and C.
Dojo is well known for its data stores for binding to server-side data over different transport protocols, and for its object-oriented, lightning-fast widget system that can be easily extended and customized. It has a style that helps guide you into clean, well-partitioned code, though it's not strictly MVC; that would require a little extra planning.
Dojo has a steeper learning curve than JQuery though.
More to your question: the Ajax calls and the object (or data store) that holds and queries this data would be your model. The widgets and CSS would be your view. And the controller would basically be your application code that wires it all together.
To keep them separate, I'd recommend a loosely coupled, event-driven system. Try to access objects directly as little as possible, keeping them "black boxed", and get data via custom events or pub/sub topics.
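A bare-bones sketch of the pub/sub idea (in Dojo you could use dojo/topic's publish and subscribe instead; the topic name and handler here are made up):
var topics = {};

function subscribe(topic, handler) {
  (topics[topic] = topics[topic] || []).push(handler);
}

function publish(topic, data) {
  (topics[topic] || []).forEach(function(handler) { handler(data); });
}

// The model publishes; views react without holding a reference to it:
subscribe('order/updated', function(order) {
  console.log('re-render summary for total', order.total);
});
publish('order/updated', { total: 42 });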
JavaScriptMVC (javascriptmvc.com) is an excellent choice for organizing and developing a large scale JS application.
The architectural design is very practical. There are four things you will ever do with JavaScript:
Respond to an event
Request Data / Manipulate Services (Ajax)
Add domain-specific information to the Ajax response.
Update the DOM
JMVC splits these into the Model, View, Controller pattern.
First, and probably the most important advantage, is the controller. Controllers use event delegation, so instead of attaching events, you simply create rules for your page. They also use the name of the controller to limit the scope of what the controller works on. This makes your code deterministic: if you see an event happen in a '#todos' element, you know there has to be a todos controller.
$.Controller.extend('TodosController', {
  'click': function(el, ev) { ... },
  '.delete mouseover': function(el, ev) { ... },
  '.drag draginit': function(el, ev, drag) { ... }
});
Next comes the model. JMVC provides a powerful Class and a basic model that let you quickly organize Ajax functionality (#2) and wrap the data with domain-specific functionality (#3). When complete, you can use models from your controller like:
Todo.findAll({after: new Date()}, myCallbackFunction);
Finally, once your todos come back, you have to display them (#4). This is where you use JMVC's view.
'.show click': function(el, ev) {
  Todo.findAll({after: new Date()}, this.callback('list'));
},
list: function(todos) {
  $('#todos').html(this.view(todos));
}
In 'views/todos/list.ejs':
<% for(var i = 0; i < this.length; i++) { %>
  <label><%= this[i].description %></label>
<% } %>
JMVC provides a lot more than architecture. It helps you in every part of the development cycle, with:
Code generators
Integrated Browser, Selenium, and Rhino Testing
Documentation
Script compression
Error reporting
I think there is definitely a place for "M" and "C" in JavaScript.
Check out AngularJS.
It helps you with your app structure and enforces a strict separation between "view" and "logic".
It is designed to work well together with other libs, especially jQuery.
A full testing environment (unit, e2e) plus dependency injection are included, so testing is a piece of cake with AngularJS.
There are a few JavaScript MVC frameworks out there, this one has the obvious name:
http://javascriptmvc.com/