As far as I understand, in good practice, the UI code should invoke the logic whenever needed, but the logic should know nothing about the GUI ("loose coupling", see for example How can I separate the user interface from the business logic while still maintaining efficiency?).
I am currently writing a chrome web app that uses the chrome.serial api. Most functions from this api are non-blocking and instead invoke a callback function when their work is done. For example
chrome.serial.getDevices(callback)
searches for devices and then calls callback with a list of the devices it found.
Now, after chrome.serial.getDevices is called from the logic part of my code, its results ultimately have to be communicated back to the UI code.
How do I achieve clean UI/logic separation in this case? Does my UI need to register callback functions with my logic code for every call it makes? That seems to violate the above principle of loose coupling and feels like it would become very confusing very quickly.
You can use Promises. Initiate them in your controller code and pass them to the view. The view will then call its .then() method and display the result.
For example:
// controller.js
const myAsyncTask = new Promise((resolve, reject) => {
    chrome.serial.getDevices(resolve);
});
view(myAsyncTask);
// view.js
function view(myAsyncTask) {
    myAsyncTask.then(render);
}
If you are using build tools, such as Webpack or Browserify, then you can have your "logic object" extend Node's EventEmitter (there are other implementations that work in-browser, such as https://github.com/Olical/EventEmitter, if you don't want to bundle Node APIs in with a build tool).
Your "logic object", a specialized EventEmitter, operates the chrome async API, which contacts the serial devices; it then processes the results according to your data-layer rules and emits its own events when it has something useful for the UI.
The UI both listens to, and emits, events on your "logic object", depending on what's happening. Bonus: separate UI objects can also use this event emitter to communicate with each other, via events.
EventEmitter is the key that will make this kind of separation feel clean, simple, and extendable.
Related
It seems to me that the "core" node.js callback syntax, i.e.
function foo(data, callback) {
    callback(false, data2);
}
is semantically superseded by events, except that
With events, you lose the last bit of static checking
Events are more flexible
Once you have more than 2 or 3 callback-y functions, callbacks get rather clunky
Events might add a very slight performance overhead (but worrying about that would be premature optimization in almost all cases)
(But then again, you have to memorize the events, too...)
So what would be a good policy for when to use what?
A good policy is to use whatever abstraction best models your use cases.
I think performance is a non-issue in this case.
If you are providing a function to a client that performs an asynchronous call, exposing it as a single function (like your example) seems completely valid and pretty clean (this seems to be the way most of the node.js db clients work).
Callbacks quickly get out of hand when there are more than 2-3, as you mentioned. But would a function with 2-3 callbacks be better modeled as an event emitter? Maybe, and that's up to you.
IMO 2-3+ callbacks would definitely be better modeled using promises, as the calling structure would be flatter.
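The "flatter structure" point can be sketched with the same three-step pipeline written both ways; stepCb and stepP are made-up async steps that just add 1:

```javascript
// Callback style: one level of nesting per step.
function stepCb(x, cb) { setImmediate(() => cb(null, x + 1)); }

stepCb(0, (e1, a) =>
  stepCb(a, (e2, b) =>
    stepCb(b, (e3, c) => console.log('callbacks:', c))));

// Promise style: the chain stays flat however many steps you add.
const stepP = (x) => new Promise((resolve) => setImmediate(() => resolve(x + 1)));

stepP(0).then(stepP).then(stepP).then((c) => console.log('promises:', c));
```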
IMO event emitters fit longer-lived objects: objects that stay alive for a "longer" duration, where you create the object and subscribe to its events over some period of time. That is a quite different use case from a single async function that exposes a callback.
Another option is to model your client as a stream.
I think a good rule of thumb is to look at where the Node standard library (and popular Node libraries) exposes an event emitter to clients, versus where it provides a callback-based API.
Node models its TCP client/server as an event emitter.
http://examples.ractivejs.org/comments
There is a line in the above example:
// fire an event, so we can (for example)
// save the comment to our server
this.fire( 'newComment', comment );
I'm curious if this is a common practice in Ractive? Firing the event rather than shooting off an AJAX request in the component? Or instantiating some model object and calling a #save method on that object to fire off the request?
Is this separation of concerns? Testing? Just simplified example code?
var user = new Comment({ text: "text is here", author: "author name" });
user.save()
The only thing I can think of is that firing the event off and letting something else handle it would possibly make testing simpler? It helps with separation of concerns, but it also seems like it would make it harder to track down who is actually handling the creation of the data.
In your opinions, who should handle the fired event? In the example it looks like you just tack it onto the "root" Ractive instance and let it handle things up there. That seems like it would get really crowded in a real-world application?
Also, as a side question to this one, how often do you find yourself using "models" with Ractive in a real-world application? Coming from the server-side world, I'm pretty used to thinking of things in terms of classes and domain models. However, the only "model" library I've seen be popular on the front-end side of things is Backbone, and Backbone seems like it would be overkill for what I'm thinking about.
I'm curious if this is a common practice in Ractive? Firing the event rather than shooting off an AJAX request in the component? Or instantiating some model object and calling a #save method on that object to fire off the request?
Let's say your app needs an <input> element to call an endpoint via AJAX when someone types in something. It's not the <input> that calls the AJAX. It's the surrounding code that hooks on to some known event fired by the input that does the AJAX when the event is fired. Ractive components are given the facilities needed to operate in that way, but you're not necessarily required to do so.
how often do you find yourself using "models" with ractive on a real world application?
Ractive doesn't impose a convention. That's why the authors prefer to call it a library than a framework. You can use any programming pattern you think is necessary. I have used Ractive in the same way React components operate (one-way binding), and I know people who use Ractive merely as a templating engine. What you're provided is a set of API to be able to do stuff. It's up to you how you use it.
If you want to know if Ractive's the only one doing this, that's a no. Several other frameworks do components in one form or another: Ember, Angular (directives), React (Flux + stateless components), Riot, Polymer (web components).
I'm familiarising myself with both Flux architecture, and Reflux - the simpler version, without a dispatcher - for use with ReactJS.
In full Flux, it sounds like actions have (or at least, can be made to have) a definite and non-trivial purpose: they can be used to update external services (eg. save data back to the server via an API), as described in this question: Should flux stores, or actions (or both) touch external services?
However, in Reflux, the actions are definitely just dumb message passers. So my question is: what purpose do they serve? Why have them at all? What bad things would happen if your Views/Components just called methods on your store directly?
I'm about to convert my little app from Flux to Reflux, and it looks like I'll be moving all the logic currently in my actions over to the store. It seems to me like the actions in Reflux do nothing other than act as a useless middleman between the component and the store. What am I missing?
Besides the ability to listen for an action in any number of stores as pointed out in the OP comments:
Reflux actions can also touch APIs and such, for example in the preEmit action hook. Typically you'd create an async action, do the async work in the preEmit hook, and then call the "completed" or "failed" sub-actions when the async work is done (or has failed). You can also do sync work in the preEmit hook, of course; logging comes to mind.
It's also fully possible to listen to an action in a store and perform the async work in the store itself, then call .completed or .failed from there, but I think the consensus that has formed is that stores shouldn't do that work; stores should just react to changing data, not perform business logic.
Async actions also work as promises. Of course you could implement your stores to do that as well, but I'd argue that that's a lot of responsibilities for a data store.
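The completed/failed shape described above can be hand-rolled in a few lines to see what it buys you; this is an illustrative sketch, not the real Reflux API (createAsyncAction and the listen helpers are made up):

```javascript
// An "action" that runs async work and routes the outcome to
// completed/failed listeners, mirroring Reflux's sub-actions.
function createAsyncAction(doWork) {
  const listeners = { completed: [], failed: [] };
  const action = (...args) =>
    doWork(...args)
      .then((result) => listeners.completed.forEach((l) => l(result)))
      .catch((err) => listeners.failed.forEach((l) => l(err)));
  action.completed = { listen: (l) => listeners.completed.push(l) };
  action.failed = { listen: (l) => listeners.failed.push(l) };
  return action;
}

// A store listens to the sub-actions instead of doing the async work:
const loadDevices = createAsyncAction(() => Promise.resolve(['COM1', 'COM3']));
loadDevices.completed.listen((devices) => console.log('store updated:', devices));
loadDevices();
```

The component only ever calls loadDevices(); it never learns which store (or how many stores) reacts, which is exactly the decoupling the middleman buys.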
I have implemented a singleton following the model described in Addy Osmani's book, Learning JavaScript Design Patterns.
This singleton sets up a SOAP connection. This is an asynchronous call, and I want to perform it in the getInstance call so that follow-on calls are guaranteed to have a connection that is completely up.
One thought I have is to pass a callback into getInstance and make that call in my main.js, so that by the time other scripts need a connection, it will be up. Every other consumer of the SOAP connection would then pass null for the callback.
Is this a hack or a good way to do this?
If it is a not standard way of doing this, what do you suggest?
When dealing with events, such as XMLHTTPRequest (be it SOAP or JSON) it's common to use a callback function.
However it's preferred to use Promises. Promises are designed to be adept at dealing with asynchronicity. The most notable advantage over callbacks is that Promises come with error handling built in (and, in some libraries, progression and cancellation).
Most popular frameworks and libraries include an implementation of Promises.
A minor note: a Singleton as a design pattern is, more often than not, an anti-pattern. Be very wary when using it, especially with regard to testability. I'm not familiar with Addy Osmani's work so I can't comment on this specific case.
The notion of a Singleton becomes moot when you apply good Dependency Injection.
There are a few ways of doing this:
1. Make your singleton an EventEmitter. Emit a ready or initialized event when initialization completes. Problem: if a client starts listening after the singleton is initialized, it will never catch the initialized event. You can add an initialized property and set it to true when initialization completes, so clients can check the object's status. Using it still requires a static check of the .initialized property, then either setting a listener or proceeding right away.
2. Add a callback to getInstance. If the object is already initialized, the callback is called on the next tick.
3. Queue all requests until initialization completes. It's super convenient for callers, but also complex to implement.
By the way, don't use getInstance in node.js; it's java-style. Just module.exports = new MyClass will do. In that case method 2 is not applicable as is, but you can add a special method just for setting such a callback, like onReady().
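A promise gives you the queueing behavior almost for free: the module exports a promise of the initialized instance instead of the instance itself. This is a sketch under assumptions; createSoapConnection is a hypothetical async initializer standing in for the real SOAP setup:

```javascript
// Hypothetical async initializer (stands in for the SOAP setup).
function createSoapConnection(cb) {
  setImmediate(() => cb(null, { call: (op) => 'result of ' + op }));
}

// connection.js
const ready = new Promise((resolve, reject) => {
  createSoapConnection((err, conn) => (err ? reject(err) : resolve(conn)));
});
module.exports = ready;

// Every consumer awaits the same promise. Consumers that show up
// after initialization resolve immediately, so there is no
// missed-event problem as with a plain EventEmitter.
ready.then((conn) => console.log(conn.call('getQuote')));
```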
I have been trying to find a solution to what seems to be relatively simple scenario. I have JavaScript running in an html page that makes a call to an XPcom that I have written in C++. So far, so good.
This XPcom retrieves a reference to the observer service (do_GetService("@mozilla.org/observer-service;1")), fires a notify which is 'seen' by the JavaScript, creates a windows thread (AFX), passes it a copy of the observer-service reference, and returns to the JavaScript caller, expecting the XPcom thread to send additional notifies at appropriate times (based on external events).
The notifies however fail to arrive in the JavaScript, and my initial thought was that the notify method will not deliver notifications from a 'different' thread. I've used the VStudio debugger to confirm the program sequence is as expected; i.e. the external event is being received by the thread and the notify method is being called... but no notify event arrives.
I've read quite a few postings across the web and nothing really 'nails' the particular scenario I'm trying to address. I'm not married to the idea of using notify:
I've tried event notification via NS_DispatchToCurrentThread however that is not working out either because I don't have an "event" from the JavaScript side to deliver. I can create one of my own within the context of the XPcom and I can 'notify it'; but, that was just a POC to prove I could deliver events from XPcom; now I need for the JavaScript side to give me an event to notify;
I've tried passing a new'ed object as an nsISupports arg; but DispatchToCurrentThread wants an nsIRunnable and I cannot figure out how to pass one of those (the idl file does not support it);
I've also considered 'wrapping' the event with some sort of object that is compatible with nsISupports, but am unsure about the details of doing so.
The objective is quite simple: deliver asynchronous events or notifications from an XPcom thread to the main (or even a sub) thread of the JavaScript. But I'm getting less than 10% traction here.
Has anyone accomplished this, or have ideas as to how to get it done?
You are correct, the observer service works only on the main thread (check the return value of your Notify() call, it should be NS_ERROR_UNEXPECTED). You are also correct as far as the solution goes - dispatch an event to the main thread, via NS_DispatchToMainThread(). As to the actual implementation, function NS_NewRunnableMethod() should help (unless you want to create your custom nsIRunnable implementation - e.g. to pass parameters). You probably want to do something like this:
nsCOMPtr<nsIRunnable> event = NS_NewRunnableMethod(xpcomComponent, &nsMyComponent::processEvent);
nsresult rv = NS_DispatchToMainThread(event);
Here xpcomComponent would be a reference to your XPCOM component instance and nsMyComponent::processEvent its method that needs to be called on main thread. That method would then be able to notify the observer.