Deferred objects vs. Backbone events for view updates in Marionette - javascript

I am learning Marionette / Backbone with the excellent book "Backbone.Marionette.js: A Gentle Introduction".
In the chapter 'Handling Data Latency', the author (David Sulc) uses jQuery Deferred objects / promises to pass data from the fetch() call (made in entities/contact.js) to the controllers (show_controller.js and list_controller.js in the example).
The code is available here: https://github.com/davidsulc/marionette-gentle-introduction/commit/de25497a1fa028f6a4471b0003645307a6e426f8
I understand how it works, but in my previous readings on Backbone this kind of thing was handled with Backbone events: we listen to a 'change' event on the model and call render() when it is triggered. So I have a few questions about that:
1/ Are Deferred objects better than Backbone events for updating the view when we get a response from the server?
2/ What are the pros and cons of the two solutions?
3/ Is it wrong / bad design to use Backbone events in this situation?
Thanks for your help!

A specific use-case would be where you want to load/instantiate a view only when a model/collection has been fetched from the server. In that case you can use the Deferred object because that'll tell you exactly when the data has been loaded.
Of course you can listen to a change event on the model/collection, but you'd need to have a view instantiated already or add an extra listener to your model/collection to create the view.
So, to answer your questions:
1. It really depends: if you don't mind the view re-rendering itself after the data arrives, events are fine. If you want to instantiate the view only once the data is there, deferreds are a good fit (see the sketch below).
2. See the introduction above.
3. Not really. It also depends on the scale of your application. As S_I_R mentions as well, deferreds will probably result in slightly cleaner code.
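For illustration, here is a rough sketch of the two approaches (Contacts, ContactsView and App.mainRegion are placeholder names, not the book's actual code):
// Deferred-based: create the view only once the data is guaranteed to be there.
var contacts = new Contacts();
var fetching = contacts.fetch();   // fetch() returns a jqXHR, which is a promise

$.when(fetching).done(function () {
  var view = new ContactsView({ collection: contacts });
  App.mainRegion.show(view);       // the first render already has the data
});

// Event-based alternative: the view has to exist up front so it can re-render itself.
var listView = new ContactsView({ collection: contacts });
listView.listenTo(contacts, "sync", listView.render);
App.mainRegion.show(listView);     // renders empty first, then again when "sync" fires
contacts.fetch();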

OK, let me try explaining:
1.
Deferred objects expose promises, so the callback handling is built right in. In a data-driven web application this helps you avoid callback hell when working with asynchronous functions (see the sketch after this list).
2.
If you need to react to a user-interface event, you still need trigger/listen from Backbone.Events, and in that case you have to propagate the event through your controllers manually. Deferred objects, on the other hand, tell you when the asynchronous work has finished, so at that point you can populate your view or do whatever other task you need.
3.
The deferred mechanism was introduced to ease situations where manual callbacks become overwhelming. Promises are much easier to use than raw callbacks, which makes the code cleaner.
And since JavaScript is single-threaded, promises are a convenient way to coordinate asynchronous operations.
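As a rough illustration of point 1 (fetchUser, fetchPosts and renderPosts are made-up helpers, not part of any particular API):
function fetchUser(id)      { return $.getJSON("/users/" + id); }              // returns a promise
function fetchPosts(user)   { return $.getJSON("/users/" + user.id + "/posts"); }
function renderPosts(posts) { console.log("rendering", posts.length, "posts"); }

fetchUser(42)
  .then(fetchPosts)                   // runs only once the user has arrived
  .done(renderPosts)                  // a flat chain instead of nested callbacks
  .fail(function (xhr) {
    console.error("Request failed:", xhr.statusText);
  });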

1) Are deferred objects better than backbone events for updating the view when we get a response from the server?
Deferred objects are better.
I'll explain why. Relying on the view to re-render after the change or sync event on its model or collection can lead to visual anomalies. Let's take an example where you fetch a collection from a server and display its models as a list, and when the collection has zero models you want to display 'No models found'. In this case, before the collection is fetched and the event is fired, the user will see 'No models found' on the screen. As soon as the collection is fetched the view re-renders itself and the user sees the models. This can lead to confusion. This is just one of many use cases, but I am sure you get my point.
Once your object has been fetched from the server you can then bind the view's render() method to the object's 'change' or 'sync' event for any subsequent changes that may happen to the object.
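A minimal sketch of that pattern, assuming a Contacts collection, a ContactListView and an App.mainRegion (all illustrative names):
var contacts = new Contacts();

contacts.fetch().done(function () {
  var view = new ContactListView({ collection: contacts });
  App.mainRegion.show(view);   // the first render already has data, so the
                               // "No models found" message never flashes
  // keep the view up to date for any later changes
  view.listenTo(contacts, "sync change add remove", view.render);
});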
2) What are the pros and the cons of the 2 solutions?
I think the answer to question 1 answers this question as well.
3) Is it wrong / bad design to use backbone events in this situation?
Probably. I would go with S_I_R's answer on this one.

Related

Achieving UI/logic separation when the logic requires callback functions

As far as I understand, in good practice, the UI code should invoke the logic whenever needed, but the logic should know nothing about the GUI ("loose coupling", see for example How can I separate the user interface from the business logic while still maintaining efficiency?).
I am currently writing a Chrome web app that uses the chrome.serial API. Most functions in this API are non-blocking and instead invoke a callback function when their work is done. For example:
chrome.serial.getDevices(callback)
searches for devices and then calls callback with a list of the devices found.
Now, after chrome.serial.getDevices is called from the logic part of my code, its results ultimately have to be communicated back to the UI code.
How do I achieve clean UI/logic separation in this case? Does my UI need to register callback functions with my logic code for every call it makes? That seems to violate the above principle of loose coupling and feels like it becomes very confusing very quickly.
You can use Promises. Initiate them in your controller code and pass them to the view. The view then calls the promise's .then() method and displays the result.
For example:
//controller.js
var myAsyncTask = new Promise((resolve, reject) => {
  chrome.serial.getDevices(resolve);   // resolve with the list of devices
});
view(myAsyncTask);
//view.js
function view(myAsyncTask) {
  myAsyncTask.then(render);            // render displays the result
}
If you are using build tools, such as Webpack or Browserify, then you can have your "logic object" extend Node's EventEmitter (there are other implementations that work in-browser, such as https://github.com/Olical/EventEmitter, if you don't want to bundle Node APIs in with a build tool).
Your "logic object", which is a specialized EventEmitter, operates the chrome async API, which contacts the serial devices, then processing the results according to your data layer rules, and then emits its own events when it has something useful for the UI.
The UI listens both listens to, and emits, events on your "logic object", depending on what's happening. Bonus: this event emitter can also be used by separate UI objects to communicate to each other, via events.
EventEmitter is the key that will make this kind of separation feel clean, simple, and extendable.
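A rough sketch of that setup (refreshDevices and renderDeviceList are illustrative names; the EventEmitter could come from Node's events module via a bundler, or from a standalone browser implementation):
// logic.js - knows about chrome.serial, knows nothing about the UI
var EventEmitter = require("events").EventEmitter;
var serialLogic = new EventEmitter();

serialLogic.refreshDevices = function () {
  var self = this;
  chrome.serial.getDevices(function (devices) {
    // apply any data-layer rules here, then announce the result
    self.emit("devices", devices);
  });
};

// ui.js - knows about the DOM, knows nothing about chrome.serial
serialLogic.on("devices", function (devices) {
  renderDeviceList(devices);   // whatever updates the device list in the page
});
serialLogic.refreshDevices();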

Promise-like syntax for JavaScript UI events [duplicate]

I am looking for a pub/sub mechanism that behaves like a promise but can resolve multiple times, and behaves like an event except if you subscribe after a notification has happened it triggers with the most recent value.
I am aware of notify, but deferred.notify is order-sensitive, so in that way it behaves just like an event, e.g.:
d.notify('notify before'); // Not observed :(
d.promise.progress(function(data){ console.log(data) });
d.notify('notify after'); // Observed
setTimeout(function(){ d.notify('notify much later') }, 100); // Also Observed
fiddle: http://jsfiddle.net/foLhag3b/
The notification system I'd like is a good fit for a UI component that should update to reflect the state of the data behind it. In these cases, you don't want to care about whether the data has arrived yet or not, and you want updates when they come in.
Maybe this is similar to Immediate mode UIs, but is distinct because it is message based.
The state of the art for message based UI updating, as far as I'm aware, is something which uses a promise or callback to initialize, then also binds an update event:
myUIComponent.gotData(model.data);
model.onUpdate(myUIComponent.gotData); // doing two things is 2x teh workz :(
I don't want to have to do both. I don't think anyone should have to, the use case is common enough to abstract.
model.whenever(myUIComponent.gotData); // yay one intention === one line of code
I could build a mechanism to do what I want, but I wanted to see if a pub/sub mechanism like this already exists. A lot of smart people have done a lot in CS and I figure this concept must exist, I just am looking for the name of it.
To be clear, I'm not looking to change an entire framework, say to Angular or React. I'm looking only for a pub/sub mechanism. Preferably an implementation of a known concept with a snazzy name like notifr or lemme-kno or touch-base.
You'll want to have a look at (functional) reactive programming. The concept you are looking for is known as a Behavior or Signal in FRP. It models the change of a value over time, and can be inspected at any time (continuously holds a value, in contrast to a stream that discretely fires events).
var ui = state.map(render); // whenever state updates, so does ui with render() result
Some popular libraries in JS are Bacon and Rx, which use their own terminology however: you'll find properties and observables.
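For example, with Rx a BehaviorSubject holds the latest value and replays it to anyone who subscribes late, which is exactly the "whenever" behaviour described above (the API shown is RxJS 5+; older versions use onNext instead of next):
function render(state) { console.log("render", state); }   // stand-in for the UI update

var state = new Rx.BehaviorSubject({ loading: true });

state.subscribe(render);                        // fires immediately with the current value
state.next({ loading: false, items: [1, 2] });  // fires again on every update
state.subscribe(render);                        // a late subscriber still gets the latest value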

Is firing off events from RactiveJS components a common pattern?

http://examples.ractivejs.org/comments
There is a line in the above example:
// fire an event, so we can (for example)
// save the comment to our server
this.fire( 'newComment', comment );
I'm curious if this is a common practice in Ractive? Firing the event rather than shooting off an AJAX request in the component? Or instantiating some model object and calling a #save method on that object to fire off the request?
Is this separation of concerns? Testing? Just simplified example code?
var user = new Comment({ text: "text is here", author: "author name" });
user.save()
The only thing I can think of is that firing the event off and letting something else handle it would possibly make testing simpler? It helps with separation of concerns, but it also seems to me like it would make it more difficult to track down who actually handles the creation of the data?
In your opinion, who should handle the fired event? In the example it looks like you just tack the handler onto the "root" Ractive instance and handle it up there? That seems like it would get really crowded in a real-world application?
Also, as a side question to this one, how often do you find yourself using "models" with Ractive in a real-world application? Coming from the server-side world, I'm pretty used to thinking in terms of classes and domain models. However, the only "model" library I've seen to be popular on the front-end side of things is Backbone, and Backbone seems to be a little overkill for what I have in mind?
I'm curious if this is a common practice in Ractive? Firing the event rather than shooting of an AJAX request in the component? Or instantiating some model object and calling a #save method on that object to fire off the request?
Let's say your app needs an <input> element to call an endpoint via AJAX when someone types something. It's not the <input> that calls the AJAX; it's the surrounding code, hooked onto some known event fired by the input, that performs the AJAX when that event fires. Ractive components are given the facilities needed to operate in that way, but you're not necessarily required to do so.
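A hedged sketch of that division of labour (the template id, component name and endpoint are made up):
// The component only announces what happened...
var CommentBox = Ractive.extend({
  template: '#comment-box-template',
  submitComment: function (text, author) {
    this.fire('newComment', { text: text, author: author });
  }
});

var box = new CommentBox({ el: '#comments' });

// ...and the surrounding "controller" code decides what that means.
box.on('newComment', function (comment) {
  $.post('/comments', comment);   // the AJAX lives outside the component
});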
how often do you find yourself using "models" with ractive on a real world application?
Ractive doesn't impose a convention. That's why the authors prefer to call it a library than a framework. You can use any programming pattern you think is necessary. I have used Ractive in the same way React components operate (one-way binding), and I know people who use Ractive merely as a templating engine. What you're provided is a set of API to be able to do stuff. It's up to you how you use it.
If you want to know if Ractive's the only one doing this, that's a no. Several other frameworks do components in one form or another: Ember, Angular (directives), React (Flux + stateless components), Riot, Polymer (web components).

Postpone Synchronize Backbone.js Collection with Server

I have a Backbone Collection that users are performing CRUD-type activities on. I want to postpone any changes from being propagated back to the server — the Collection.sync() should not happen until the user initiates it (like POSTING a form).
As it stands, I have been able to implement on-the-fly updates with no issue (by calling things like Model.destroy() on the models when deleted, or Collection.add() to add new models to the collection). As I understand it, I could pass the {silent: true} option to my models, preventing .sync() from being called during .add()/.destroy(), but from what I can tell, that could lead to some headaches later.
I have considered overriding Backbone.sync, but I am not sure if that is the best route — I feel like there is some way to hook into some events, but I am not sure. Of course I have read through the Backbone docs, annotated source, and relevant SO questions before posting this, but I have hit a wall trying to extrapolate for this particular situation.
Eventually I will need to implement this in many places in my application, which is why I am concerned about best-practices at this stage. I am looking for some guidance/suggestions/thoughts on how to proceed with preventing the default behavior of immediately syncing changes with the remote server. Any help is appreciated — thank you for your time!
EDIT:
I went with Alex P's suggestion of refactoring: in my collection I set up some attributes to track the models that have been edited, added, or deleted. Then, when the user triggers the save action, I iterate through the lists and do the appropriate actions.
The first step is to ensure that your collection is being synchronised when you suspect it is. Collection.add() shouldn't trigger a Collection.sync() by default (it's not mentioned in the method documentation or the list of events, and I couldn't see a trigger in the annotated source).
Model.destroy() does trigger a sync(), but that shouldn't be a surprise - it's explicitly defined as "destroying a model on the server", and that sync() is performed on the model, not the collection. Your destroyed models will be removed from any collections that contain them, but I wouldn't expect those collections to sync() unless explicitly asked.
If your collections really are sync()ing when you're not expecting them to, then the most likely culprit is an event listener somewhere. Have you added any event listeners that call sync() for you when they see add or remove events? If your collection should sync() only on user interaction, can you remove those event listeners?
If not, then passing {silent: true} into your methods might be a viable approach. But remember that this is just stopping events from being emitted - it's not stopping that code from running. If something other than an event listener is triggering your sync()s, then preventing those events from being emitted won't stop them.
It would also be worth considering a wider refactor of your app. Right now you modify the collection and models immediately, and try to delay all sync()s until after the user clicks a button. What if you cached a list of all models to destroy & items to add, and only performed the actions when the button is clicked? Storing the model IDs would be sufficient to destroy them, and storing the collection ID and model ID would let you add items. It also means you don't have to fetch() the collection again if the user decides not to save their changes after all.
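A rough sketch of that refactor (not the asker's actual code; the method names are made up):
var DraftCollection = Backbone.Collection.extend({
  initialize: function () {
    this._toAdd = [];
    this._toDestroy = [];
  },

  draftAdd: function (attrs) {
    var model = new this.model(attrs);
    this._toAdd.push(model);
    this.add(model);                       // local only - no request goes out
  },

  draftRemove: function (model) {
    if (!model.isNew()) this._toDestroy.push(model);
    this.remove(model);                    // local only as well
  },

  commit: function () {                    // called when the user hits "Save"
    _.invoke(this._toDestroy, 'destroy');  // DELETE requests
    _.invoke(this._toAdd, 'save');         // POST requests
    this._toAdd = [];
    this._toDestroy = [];
  }
});
Edited models could be tracked the same way and have save() called on them inside commit().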

Run javascript on JSON change?

I know enough jQuery/JavaScript to be dangerous. I have a JSON array that I'm interacting with using two different elements (a calendar and a table, to be precise). Is there an event handler (or any other way) I could bind to so that the table would refresh when the JSON changes?
Basic programming: parse the JSON (a string) into a JavaScript object or array (you have probably already done that), then use an implementation of the observer pattern.
I suggest taking a good look at Adam Merrifield's interesting links.
Most of the time, the key is to use getters and setters, so that you can fire a custom event (or call a callback method) inside the setter.
KnockoutJS is a good framework to help you do such binding. It also uses the observable/observer-subscriber pattern.
Using timers (polling) is not a really good idea: there's too much overhead (you do work even when nothing has changed), and you will always lag x ms behind, depending on the polling frequency.
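A tiny observer-pattern sketch along those lines (ObservableStore and the refresh functions are made-up names):
function ObservableStore(data) {
  this._data = data;
  this._listeners = [];
}
ObservableStore.prototype.onChange = function (fn) { this._listeners.push(fn); };
ObservableStore.prototype.set = function (data) {
  this._data = data;
  this._listeners.forEach(function (fn) { fn(data); });   // "fire" the change
};

function refreshTable(data)    { console.log('table ->', data); }      // stand-ins for the
function refreshCalendar(data) { console.log('calendar ->', data); }   // real UI updates

var store = new ObservableStore(JSON.parse('[{"id":1,"title":"Meeting"}]'));
store.onChange(refreshTable);
store.onChange(refreshCalendar);
store.set([{ id: 1, title: 'Meeting (moved)' }]);   // both widgets refresh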
You might want to consider Knockout.JS
It allows bi-directional mapping, so a change to your model is reflected in your view and vice versa.
http://knockoutjs.com/documentation/json-data.html
However, it might be late stages of your dev cycle, but something to consider.
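As a minimal hedged example of the Knockout route (the markup and property names are invented):
// <table data-bind="foreach: events"> ... </table>   <- both widgets bind to the same array
var viewModel = { events: ko.observableArray([{ id: 1, title: 'Meeting' }]) };
ko.applyBindings(viewModel);

// later, when fresh JSON arrives:
viewModel.events(JSON.parse('[{"id":1,"title":"Standup"}]'));   // everything bound to it updates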
