NowJS server event notifications - javascript

I'm trying to implement a system where an external server (SuperFeedr) sends a request to my server (running Node), and my server processes it, then sends that data straight to the client in realtime using NowJS.
The problem is, I cannot access the everyone namespace in my server functions, since it has to be initialized after the listen() function is called, which has to happen after the functions are declared. So basically:
Needs:
NowJS -> Listen -> Server functions -> everyone variable -> NowJS
It seems I have a dependency loop, and I have no idea how to resolve it.

Start all of them independently. When one of them is up, put a reference to it into a shared parent scope. When e.g. the server receives a notification, just drop it if nowjs isn't ready yet. Simplified example:
var a, b;

initializeA(function(a_) {
    a = a_;
    a.on('request', function(request, response) {
        if (!b) {
            // B isn't ready yet, drop the request
            return response.end();
        }
        // ...
    });
});

initializeB(function(b_) {
    b = b_;
    b.on('request', function(request, response) {
        if (!a) {
            // A isn't ready yet, drop the request
            return response.end();
        }
        // ...
    });
});
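Applied to the NowJS case specifically, the shared reference could look something like this (a rough sketch; receiveFeedUpdate, the port, and the SuperFeedr parsing are placeholders, not part of the original code):

var everyone; // shared parent scope

var server = require('http').createServer(function (req, res) {
    // SuperFeedr notification arrives here
    if (!everyone) {
        // NowJS isn't ready yet, drop the notification
        return res.end();
    }
    // process the payload, then push it straight to connected clients
    everyone.now.receiveFeedUpdate(/* parsed data */);
    res.end();
});
server.listen(8080);

// initialize() runs after listen(), but synchronously, before any
// request can possibly be handled, so the guard above is a safety net
everyone = require('now').initialize(server);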


How to add additional handlers to an existing connection?

Within my MVC 5 application, I am setting up a SignalR connection on the client end upon page load; this works as expected.
At some point later on I want to add an additional handler and make a server-side call. I can see that the server receives this call, which then initiates some client-side calls, but the handlers at the client don't get invoked.
Connection setup upon page load
function initialiseRealTimeDataRetrieval() {
    var hub = $.connection.autoGeneratedProxyForHub;
    hub.client.recieveRealTimeData = function (data) {
        //Do Stuff
    };
    $.connection.hub.start().done(function () {
        hub.server.getRealTimeData();
    });
}
Additional calls made later on
function initialiseFeed() {
    var hub = $.connection.autoGeneratedProxyForHub;
    hub.client.recieveRealTimeDataFeed = function (data) {
        //Do stuff
    };
    if ($.connection.hub.state == $.connection.connectionState.connected) {
        hub.server.getRealTimeDataFeed();
    }
    else {
        $.connection.hub.start().done(function () {
            hub.server.getRealTimeDataFeed();
        });
    }
}
So far I have tried the following:
Made sure that calls made from the client to the server are being invoked on the server.
Made sure that the additional calls work as expected if they are made along with the calls and handlers executing upon page load.
Reviewed the documentation to see if a connection must be restarted to register new handlers.
Attempted various methods of restarting the connection after new handlers were added.
The below works as expected for the additional calls, however it makes everything done for the connection upon page load redundant:
function initialiseFeed() {
    var hub = $.connection.autoGeneratedProxyForHub;
    hub.client.recieveRealTimeDataFeed = function (data) {
        //Do stuff
    };
    $.connection.hub.stop();
    $.connection.hub.start().done(function () {
        hub.server.getRealTimeDataFeed();
    });
}
Inspecting the hub object through the debugger does show that all clients are connected, including the additional ones.
According to the SignalR JS API docs, the automatically generated proxy for the hub can't be used to register multiple event handlers:
When to use the generated proxy
If you want to register multiple event handlers for a client method
that the server calls, you can't use the generated proxy. Otherwise,
you can choose to use the generated proxy or not based on your coding
preference. If you choose not to use it, you don't have to reference
the "signalr/hubs" URL in a script element in your client code.
Also, to register new handlers on an existing connection, that connection must have had at least one handler registered before start() was called to establish it:
Note
Normally you register event handlers before calling the start method
to establish the connection. If you want to register some event
handlers after establishing the connection, you can do that, but you
must register at least one of your event handler(s) before calling the
start method. One reason for this is that there can be many Hubs in an
application, but you wouldn't want to trigger the OnConnected event on
every Hub if you are only going to use one of them. When the
connection is established, the presence of a client method on a Hub's
proxy is what tells SignalR to trigger the OnConnected event. If you
don't register any event handlers before calling the start method, you
will be able to invoke methods on the Hub, but the Hub's OnConnected
method won't be called and no client methods will be invoked from the
server.
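Based on that note, one workaround would be to register a no-op placeholder for the feed handler before start(), so the client subscribes to the hub up front; the real handler can then replace the placeholder later without restarting the connection. A sketch (untested, reusing the names from the question):

function initialiseRealTimeDataRetrieval() {
    var hub = $.connection.autoGeneratedProxyForHub;
    hub.client.recieveRealTimeData = function (data) {
        //Do Stuff
    };
    // placeholder registered BEFORE start(), so SignalR subscribes this
    // client to the hub and can deliver the call once the real handler exists
    hub.client.recieveRealTimeDataFeed = function () {};
    $.connection.hub.start().done(function () {
        hub.server.getRealTimeData();
    });
}

function initialiseFeed() {
    var hub = $.connection.autoGeneratedProxyForHub;
    // replace the placeholder; the subscription already exists
    hub.client.recieveRealTimeDataFeed = function (data) {
        //Do stuff
    };
    hub.server.getRealTimeDataFeed();
}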

Setting a cron job for a Meteor Method

I have this piece of code:
Meteor.methods({
    GetTickerInfo: function() {
        var Future = Npm.require('fibers/future');
        var myFuture = new Future();
        kraken.api('Ticker', {"pair": 'ETHXBT'}, function(error, data) {
            if (error) {
                console.log(error);
            }
            else {
                console.log(data.result);
                console.log(data.result.XETHXXBT.a);
                myFuture.return(data.result);
            }
        });
        console.log("EHEHEHEHEHHEEH");
        var result = myFuture.wait();
        console.log(result);
        return result;
    }
});
What it does is call an API, get some data back, and when it's done return the data to the client so I can visualise it in the graph. For now it's a manual button click on the client side which calls the method, does the job, and returns the data.
I would like to schedule a cron job to do that: every 5 seconds, make an API call and return the data to the client (because that is where I visualise it). All the cron jobs work with standalone functions, but I can't access the GetTickerInfo function because it is defined in the scope of Meteor.methods.
How can I call it from a cron job, but also keep the occasional Meteor call from the client side for when I want to manually refresh at a given moment?
Can anyone show how they would implement this with e.g. the cron package percolatestudio/meteor-synced-cron?
You have to be outside of the method's scope, and I would personally do:
SyncedCron.add({
    name: 'GetTickerInfo cron',
    schedule: function(parser) {
        return parser.text('every 5 seconds');
    },
    job: function() {
        Meteor.call('GetTickerInfo');
    }
});
SyncedCron.start();
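If you also want to keep the occasional manual Meteor.call from the client, one option is to pull the API logic out into a plain server-side function and have both the method and the cron job call it. A sketch (getTickerInfo is a hypothetical helper name, reusing the Future-based code from the question):

function getTickerInfo() {
    var Future = Npm.require('fibers/future');
    var myFuture = new Future();
    kraken.api('Ticker', {"pair": 'ETHXBT'}, function(error, data) {
        if (error) myFuture.throw(error);
        else myFuture.return(data.result);
    });
    return myFuture.wait();
}

Meteor.methods({
    GetTickerInfo: function() {
        return getTickerInfo(); // manual refresh from the client
    }
});

SyncedCron.add({
    name: 'GetTickerInfo cron',
    schedule: function(parser) {
        return parser.text('every 5 seconds');
    },
    job: function() {
        return getTickerInfo(); // same logic on a schedule
    }
});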

SignalR-Hub after IIS stop,start will no longer call client functions

I have a queue system using SignalR 2.1.1 with Angular. Everything is working perfectly, actually. However, when I decided to test the system against an IIS outage I noticed a problem. When I stop and then start IIS (an IIS restart doesn't cause the issue), the JavaScript functions that the hub calls no longer fire. That makes sense to me, but the problem is that the client can still call the server without any issue, so the user has no idea they are disconnected. This would certainly mess up my queue state.
So the solution would seem to be to detect this disconnect somehow and reconnect if necessary. Is there a way to test whether the client functions my hub is calling are still connected? Since I can still call the hub, it seems it must have reconnected, although I don't see any of that activity happening. I've tried the disconnected, reconnecting and stateChanged events on the client side to see if I could catch it happening, with no luck.
Thank you for any assistance.
So my solution was to create a method on the hub that only responds to the caller:
public void LastChange()
{
    Clients.Caller.lastChange();
}
I hooked that call back to this function in my Angular controller:
vm.queueHub.client.lastChange = function onLastChange()
{
    vm.lastChangeCalledBack = true;
};
Also in my controller I created this function, which tests for the lastChangeCalledBack variable set by the function the hub calls. If it's not set after some interval of testing, I assume we've lost the connection:
vm.stillAlive = function()
{
    vm.queueHub.server.lastChange();
    var found = $interval(function()
    {
        if (vm.lastChangeCalledBack == true)
        {
            vm.lastChangeCalledBack = false;
            $interval.cancel(found);
        }
    }, 100, 10);
    return found;
};
Finally I created this function in my controller and call it from any functions that make queue changes from the UI, passing in the callback to call if the connection is still valid. The promise handlers may look reversed, but it follows from how $interval works: its promise is resolved only after all iterations complete (meaning the callback never arrived, so the connection is lost) and rejected when the interval is cancelled (meaning the callback did arrive and the connection is intact): $interval docs
function verifyConnection(callback)
{
    vm.stillAlive().then(
        function (data) {
            console.log("Lost connection with server: " + data);
            signalrFactory.start();
            var reconnectedMessage = "There was a server disconnect. Your connection has been re-established, but you should reload your browser.";
            getQueue(function () { alert(reconnectedMessage); });
        },
        function (data) {
            console.log("Server connection intact: " + data);
            callback();
        }
    );
}
So for example, this is called from the UI to open a modal:
vm.open = function (item)
{
    verifyConnection(function () {
        openFlagModal(item);
    });
};
I also plan to call the verifyConnection() function periodically. This solution seems to work and keeps all the clients in sync with the server no matter what. However, I don't like the fact that the SignalR client is already sending pings to the server and re-establishing the connection, just not re-registering the client callback methods. It makes me wonder if I'm doing something wrong that causes the client functions not to get reconnected.
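For the periodic check, something as simple as this could work in the controller (a sketch; the 30-second interval is arbitrary, and it assumes $interval and $scope are injected):

// re-verify the connection every 30 seconds; verifyConnection()
// already handles the reconnect-and-notify path
var healthCheck = $interval(function () {
    verifyConnection(angular.noop);
}, 30000);

// clean up when the controller's scope goes away
$scope.$on('$destroy', function () {
    $interval.cancel(healthCheck);
});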
Any thoughts on this solution?

Efficient closure structure in node.js

I'm starting to write a server in node.js and wondering whether or not I'm doing things the right way...
Basically my structure is like the following pseudocode:
function processStatus(file, data, status) {
    ...
}

function gotDBInfo(dbInfo) {
    var myFile = dbInfo.file;
    function gotFileInfo(fileInfo) {
        var contents = fileInfo.contents;
        function sentMessage(status) {
            processStatus(myFile, contents, status);
        }
        sendMessage(myFile.name + contents, sentMessage);
    }
    checkFile(myFile, gotFileInfo);
}

checkDB(query, gotDBInfo);
In general, I'm wondering if this is the right way to code for node.js, and more specifically:
1) Is the VM smart enough to run "concurrently" (i.e. switch contexts) between each callback to not get hung up with lots of connected clients?
2) When garbage collection is run, will it clear the memory completely once the last callback (processStatus) has finished?
Node.js is event-based; all code is basically event handlers. The V8 engine executes any synchronous code in a handler to completion, then processes the next event.
An async call (network/file IO) posts the blocking IO work to another thread (that's done in libev/libeio AFAIK, I may be wrong on this). Your app can then handle other clients. When the IO task is done, an event is fired and your callback function is called.
Here's an example of the async call flow, simulating a Node app handling a client request:
function onRequest(req, res) {
    // we have to do some IO and CPU intensive task before responding to the client
    asyncCall(function callback1() {
        // callback1() triggers after asyncCall() has done its part
        // *note that some other code might have been executed in between*
        moreAsyncCall(function callback2(data) {
            // callback2() triggers after moreAsyncCall() has done its part
            // note that some other code might have been executed in between
            // res is in scope thanks to the closure
            res.end(data);
            // callback2() returns here, Node can execute other code
            // the client should receive a response
            // the TCP connection may be kept alive though
        });
        // callback1() returns here, Node can execute other code
        // we could have done the processing of asyncCall() synchronously
        // in callback1(), but that would block for too long,
        // so we use moreAsyncCall() to *yield to other code*
        // this is kind of like cooperative scheduling
    });
    // tasks are scheduled by calling asyncCall()
    // onRequest() returns here, Node can execute other code
}
When V8 does not have enough memory, it runs garbage collection. It knows whether a chunk of memory is reachable by a live JavaScript object. I'm not sure whether it aggressively cleans up memory upon reaching the end of a function.
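To the second question, what matters to the GC is reachability: once the last callback has run and nothing references the closures anymore, the captured variables become unreachable and are eligible for collection. A sketch of the idea, using hypothetical names based on your pseudocode:

function gotDBInfo(dbInfo) {
    var contents = dbInfo.hugeBuffer; // kept alive as long as any inner
                                      // function could still run
    function sentMessage(status) {
        processStatus(dbInfo.file, contents, status);
    }
    sendMessage(dbInfo.file.name, sentMessage);
    // after sentMessage fires and the event system drops its reference,
    // `contents` becomes unreachable and can be collected
}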
References:
This Google I/O presentation discussed the GC mechanism of Chrome (hence V8).
http://platformjs.wordpress.com/2010/11/24/node-js-under-the-hood/
http://blog.zenika.com/index.php?post/2011/04/10/NodeJS

Handling interdependent and/or layered asynchronous calls

As an example, suppose I want to fetch a list of files from somewhere, then load the contents of these files and finally display them to the user. In a synchronous model, it would be something like this (pseudocode):
var file_list = fetchFiles(source);
if (!file_list) {
    display('failed to fetch list');
} else {
    for (file in file_list) { // iteration, not enumeration
        var data = loadFile(file);
        if (!data) {
            display('failed to load: ' + file);
        } else {
            display(data);
        }
    }
}
This provides decent feedback to the user and I can move pieces of code into functions if I so deem necessary. Life is simple.
Now, to crush my dreams: fetchFiles() and loadFile() are actually asynchronous. The easy way out is to transform them into synchronous functions. But this is not good, as the browser would lock up waiting for calls to complete.
How can I handle multiple interdependent and/or layered asynchronous calls without delving deeper and deeper into an endless chain of callbacks, in classic reductio ad spaghettum fashion? Is there a proven paradigm to cleanly handle these while keeping code loosely coupled?
Deferreds are really the way to go here. They capture exactly what you (and a whole lot of async code) want: "go away and do this potentially expensive thing, don't bother me in the meantime, and then do this when you get back."
And you don't need jQuery to use them. An enterprising individual has ported Deferred to underscore, and claims you don't even need underscore to use it.
So your code can look like this:
function fetchFiles(source) {
    var dfd = _.Deferred();
    // do some kind of thing that takes a long time
    doExpensiveThingOne({
        source: source,
        complete: function(files) {
            // this informs the Deferred that it succeeded, and passes
            // `files` to all its success ("done") handlers
            dfd.resolve(files);
            // if you know how to capture an error condition, you can also
            // indicate that with dfd.reject(...)
        }
    });
    return dfd;
}

function loadFile(file) {
    // same thing!
    var dfd = _.Deferred();
    doExpensiveThingTwo({
        file: file,
        complete: function(data) {
            dfd.resolve(data);
        }
    });
    return dfd;
}
// and now glue it together
_.when(fetchFiles(source))
    .done(function(files) {
        // forEach gives each iteration its own `file` binding, so the
        // fail handler below reports the right file
        files.forEach(function(file) {
            _.when(loadFile(file))
                .done(function(data) {
                    display(data);
                })
                .fail(function() {
                    display('failed to load: ' + file);
                });
        });
    })
    .fail(function() {
        display('failed to fetch list');
    });
The setup is a little wordier, but once you've written the code to handle the Deferred's state and stashed it in a function somewhere, you won't have to worry about it again, and you can play around with the actual flow of events very easily. For example:
var file_dfds = [];
files.forEach(function(file) {
    file_dfds.push(loadFile(file));
});
// when() expects deferreds as separate arguments, hence apply()
_.when.apply(_, file_dfds)
    .done(function(/* one resolved value per file */) {
        // this will only run if and when ALL the files have successfully
        // loaded!
    });
Events
Maybe using events is a good idea. It keeps you from creating code-trees and decouples your code.
I've used bean as the framework for events.
Example pseudocode:
// async request for files
function fetchFiles(source) {
    IO.get(..., function (data, status) {
        if (data) {
            bean.fire(window, 'fetched_files', data);
        } else {
            bean.fire(window, 'fetched_files_fail', data, status);
        }
    });
}

// handler for when we get data
function onFetchedFiles(event, files) {
    for (file in files) {
        var data = loadFile(file);
        if (!data) {
            display('failed to load: ' + file);
        } else {
            display(data);
        }
    }
}

// handler for failures
function onFetchedFilesFail(event, status) {
    display('Failed to fetch list. Reason: ' + status);
}

// subscribe the window to these events
bean.on(window, 'fetched_files', onFetchedFiles);
bean.on(window, 'fetched_files_fail', onFetchedFilesFail);

fetchFiles(source);
Custom events and this kind of event handling are implemented in virtually all popular JS frameworks.
Sounds like you need jQuery Deferred. Here is some untested code that might help point you in the right direction:
$.when(fetchFiles(source)).then(function(file_list) {
    if (!file_list) {
        display('failed to fetch list');
    } else {
        // forEach gives each callback its own `file` binding
        file_list.forEach(function(file) {
            $.when(loadFile(file)).then(function(data) {
                if (!data) {
                    display('failed to load: ' + file);
                } else {
                    display(data);
                }
            });
        });
    }
});
I also found another decent post which gives a few use cases for the Deferred object.
If you do not want to use jQuery, what you could use instead are web workers in combination with synchronous requests. Web workers are supported in every major browser, with the exception of Internet Explorer versions before 10.
Web Worker browser compatibility
Basically, if you're not entirely certain what a web worker is, think of it as a way for browsers to execute specialized JavaScript on a separate thread without impacting the main thread (Caveat: On a single-core CPU, both threads will run in an alternating fashion. Luckily, most computers nowadays come equipped with dual-core CPUs). Usually, web workers are reserved for complex computations or some intense processing task. Just keep in mind that any code within the web worker CANNOT reference the DOM nor can it reference any global data structures that have not been passed to it. Essentially, web workers run independent of the main thread. Any code that the worker executes should be kept separate from the rest of your JavaScript code base, within its own JS file. Furthermore, if the web workers need specific data in order to properly work, you need to pass that data into them upon starting them up.
Yet another important thing worth noting is that any JS libraries you need in order to load the files must be copied directly into the JavaScript file that the worker will execute. That means these libraries should first be minified (if they haven't been already), then copied and pasted into the top of the file.
Anyway, I decided to write up a basic template to show you how to approach this. Check it out below. Feel free to ask questions/criticize/etc.
On the JS file that you want to keep executing on the main thread, you want something like the following code below in order to invoke the worker.
function startWorker(dataObj)
{
    var message = {},
        worker;
    try
    {
        worker = new Worker('workers/getFileData.js');
    }
    catch (error)
    {
        // Throw error
    }
    message.data = dataObj;
    // all data is communicated to the worker in JSON format
    message = JSON.stringify(message);
    // This is the function that will handle all data returned by the worker
    // (note: the property is lowercase `onmessage`)
    worker.onmessage = function(e)
    {
        display(JSON.parse(e.data));
    };
    worker.postMessage(message);
}
Then, in a separate file meant for the worker (as you can see in the code above, I named my file getFileData.js), write something like the following...
function fetchFiles(source)
{
    // Put your code here
    // Keep in mind that any requests made should be synchronous, as this
    // will not impact the main thread
}

function loadFile(file)
{
    // Put your code here
    // Keep in mind that any requests made should be synchronous, as this
    // will not impact the main thread
}

onmessage = function(e)
{
    var response = [],
        data = JSON.parse(e.data),
        file_list = fetchFiles(data.source),
        file, fileData;
    if (!file_list)
    {
        response.push('failed to fetch list');
    }
    else
    {
        for (file in file_list)
        { // iteration, not enumeration
            fileData = loadFile(file);
            if (!fileData)
            {
                response.push('failed to load: ' + file);
            }
            else
            {
                response.push(fileData);
            }
        }
    }
    response = JSON.stringify(response);
    postMessage(response);
    close();
};
PS: Also, I dug up another thread which would better help you understand the pros and cons of using synchronous requests in combination with web workers.
Stack Overflow - Web Workers and Synchronous Requests
async is a popular asynchronous flow control library often used with node.js. I've never personally used it in the browser, but apparently it works there as well.
This example would (theoretically) run your two functions, returning an object of all the filenames and their load status. async.map runs in parallel, while waterfall is a series, passing the results of each step on to the next.
I am assuming here that your two async functions accept callbacks. If they do not, I'd require more info as to how they're intended to be used (do they fire off events on completion? etc).
async.waterfall([
    function (done) {
        fetchFiles(source, function(list) {
            if (!list) done('failed to fetch file list');
            else done(null, list);
        });
        // alternatively you could simply fetchFiles(source, done) here, and handle
        // the null result in the next function.
    },
    function (file_list, done) {
        // map's iterator receives (item, callback)
        var loadHandler = function (file, cb) {
            loadFile(file, function(data) {
                if (!data) {
                    display('failed to load: ' + file);
                } else {
                    display(data);
                }
                // if any of the callbacks to `map` returned an error, it would halt
                // execution and pass that error to the final callback. So we don't pass
                // an error here, but rather a tuple of the file and load result.
                cb(null, [file, !!data]);
            });
        };
        async.map(file_list, loadHandler, done);
    }
], function(err, result) {
    if (err) return display(err);
    // All files loaded! (or failed to load)
    // result would be an array of tuples like [[file, bool file loaded?], ...]
});
waterfall accepts an array of functions and executes them in order, passing the result of each along as the arguments to the next, along with a callback function as the last argument, which you call with either an error, or the resulting data from the function.
You could of course add any number of different async callbacks between or around those two, without having to change the structure of the code at all. waterfall is actually only 1 of 10 different flow control structures, so you have a lot of options (although I almost invariably end up using auto, which allows you to mix parallel and series execution in the same function via a Makefile-like requirements syntax).
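For illustration, here is roughly how the same flow might look with auto (untested; note that in the async versions of this era, dependent tasks receive (callback, results) in that order):

async.auto({
    file_list: function (done) {
        fetchFiles(source, function (list) {
            done(list ? null : 'failed to fetch file list', list);
        });
    },
    contents: ['file_list', function (done, results) {
        async.map(results.file_list, function (file, cb) {
            loadFile(file, function (data) {
                cb(null, [file, !!data]); // same tuple shape as above
            });
        }, done);
    }]
}, function (err, results) {
    if (err) return display(err);
    // results.contents holds the [file, loaded?] tuples from the map step
});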
I had this issue with a webapp I'm working on and here's how I solved it (with no libraries).
Step 1: Wrote a very lightweight pubsub implementation. Nothing fancy. Subscribe, Unsubscribe, Publish and Log. Everything (with comments) adds up to 93 lines of JavaScript. 2.7kb before gzip.
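For illustration, a stripped-down version of such a pubsub might look like the following (a hypothetical reconstruction matching the usage below, not the author's actual 93 lines):

var pubsub = { notification: (function () {
    var subs = {};
    return {
        subscribe: function (name, subscriber, fn) {
            (subs[name] = subs[name] || []).push({ id: subscriber, fn: fn });
        },
        unsubscribe: function (name, subscriber) {
            subs[name] = (subs[name] || []).filter(function (s) {
                return s.id !== subscriber;
            });
        },
        publish: function (name, params, publisher) {
            (subs[name] || []).forEach(function (s) {
                // deliver via setTimeout so publishing never blocks
                setTimeout(function () {
                    s.fn({ notificationParams: params, publisher: publisher });
                }, 0);
            });
        }
    };
}()) };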
Step 2: Decoupled the process you were trying to accomplish by letting the pubsub implementation do the heavy lifting. Here's an example:
// listen for when files have been fetched and set up what to do when the data comes in
pubsub.notification.subscribe(
    "processFetchedResults", // notification to subscribe to
    "fetchedFilesProcesser", // subscriber
    /* what to do when files have been fetched */
    function(params) {
        var file_list = params.notificationParams.file_list;
        for (file in file_list) { // iteration, not enumeration
            var data = loadFile(file);
            if (!data) {
                display('failed to load: ' + file);
            } else {
                display(data);
            }
        }
    }
);

// trigger fetch files
function fetchFiles(source) {
    // ajax call to source
    // on response code 200 publish "processFetchedResults"
    // set publish parameters as ajax call response
    pubsub.notification.publish("processFetchedResults", ajaxResponse, "fetchFilesFunction");
}
Of course this is very verbose in the setup and scarce on the magic behind the scenes.
Here are some technical details:
I'm using setTimeout to handle triggering subscriptions. This way they run in a non-blocking fashion.
The call is effectively decoupled from the processing. You can write a different subscription to the notification "processFetchedResults" and do multiple things once the response comes through (for example logging and processing) while keeping them in very separate, tiny and easily-managed code blocks.
The above code sample doesn't address fallbacks or run proper checks. I'm sure it will require a bit of tooling to get to production standards. Just wanted to show you how possible it is and how library-independent your solution can be.
Cheers!
