I've got an app I'm writing in React Native. It uses WebSockets, and I have a single file that controls all of the socket logic.
import {Alert, AppState} from 'react-native';
import store from '../store/store';
import {updateNotifications} from '../reducers/notifications';
import {setError, clearError} from '../reducers/error';
import {updateCurrentEvent, updateEventStatus, setCurrentEvent} from '../reducers/event_details';
import {setAlert} from '../reducers/alert';
import {ws_url} from '../api/urls'
let conn = new WebSocket(ws_url);
/*
handleSocketConnections handles any actions that require rerouting. The rest are passed off to handleOnMessage
This is being called from authLogin on componentDidMount. It would be ideal to only initialize a socket conn
when a user logs in somehow, but this package gets run when a user opens the app, meaning there are socket
connections that don't need to exist yet.
*/
function setAppStateHandler() {
AppState.addEventListener('change', cstate => {
if(cstate === 'active') {
reconnect()
}
})
}
export const handleSocketConnections = (navigator, route) => {
setAppStateHandler();
conn.onmessage = e => {
const state = store.getState();
const msg = JSON.parse(e.data);
const { type, payload, event_id } = msg;
const { event } = state.event_details.event_details;
if (type == "SET_EVENT_STATUS" && payload == "CLOSED" && event_id == event.event_id) {
navigator.push(route)
// store.dispatch(setAlert({
// message:"Event is closed, click to navigate to checkout."
// , scene: null
// }))
store.dispatch(updateEventStatus(payload));
} else {
handleOnMessage(msg, state)
}
}
}
export function reconnect() {
//TODO: Fatal errors should redirect the mainNav to a fatal error screen. Not dismount the nav entirely, as it does now
//and this should pop the error screen when it's fixed.
let state = store.getState();
conn = new WebSocket(ws_url);
setTimeout(function () {
if (conn.readyState == 1) {
if (typeof state.event_details.event_details != 'undefined') {
setSocketedEventInfo(state.event_details.event_details.event.event_id);
}
store.dispatch(clearError());
} else {
store.dispatch(setError('fatal',`Socket readyState should be 1 but it's ${conn.readyState}`))
}
}, 1000);
}
//Perform function on WS close.
conn.onclose = e => {
console.log("Closing wsbidder, ", `${e.code} -- ${e.reason}`);
//TODO: Set error here saying they need to restart the app. Maybe a 'reconnect' somehow?
//Maybe set a store variable to socketErr and if null, all is good. Else, panic the app?
//Use Case: Server is not started and user tries to connect to the app. The e.message string contains "Connection refused"
store.dispatch(setError("fatal", `Socket onclose: ${e.code} -- ${e.reason}`))
};
conn.onerror = e => {
console.log("Error at socket, ", e);
store.dispatch(setError("fatal", `Socket onerror: ${e.message}`))
};
//Initialization function for websocket.
// conn.onopen = e => console.log("Opening wsbidder, ", e)
function handleOnMessage(msg, state) {
switch (msg.type) {
//These types come from the SocketWrappers on the server.
//updateCurrentEvent should be filtering the event by event_id.
case "EVENT_ITEMS":
store.dispatch(updateCurrentEvent(
msg.payload
, state.user_info.uid
, state.event_details.event_details.event.event_id));
break;
case "NOTIFICATIONS":
//bug: this needs to filter notifications per event on the client-side.
store.dispatch(updateNotifications(
msg.payload
, state.event_details.event_details.event.event_id
, state.user_info.uid)
);
break;
case "NOT_BIDDABLE":
if (msg.event_id == state.event_details.event_details.event.event_id) {
store.dispatch(updateEventStatus("CLOSED"));
}
break;
case "PUSH_NOTIFICATION":
const {title, message} = msg.payload;
Alert.alert(title, message);
break;
default:
console.warn(`Unrecognized socket action type: ${msg.type}`);
}
}
//closes the socket connection and sends a reason to the server.
export const closeConn = reason => conn.close(null, reason);
export const setSocketedEventInfo = event_id => {
//Gives the event ID to the socketed connection, which pulls end dates.
const msg = {
type: "UPDATE_EVENT_DETAILS"
, payload: { event_id }
}
conn.send(JSON.stringify(msg));
}
export const createBid = (bid, cb) => {
/*
Expects:
const new_bid = {
item_id: item.item_id,
bid: amount, //Storage keeps storing it as a string
uid: 0, //Not needed here, but can't be null since the server wants an int.
event_id, key, bidder
};
*/
const new_bid = {
type: 'BID'
, payload: bid
};
// Send this to the server socket
conn.send(JSON.stringify(new_bid));
//Returning the callback so the front-end knows to flip the card back over.
return cb()
};
Some of the code is crap, I know. Unless you're giving true advice, which I'm always glad to follow, no need to bash it :-)
The issue I'm having is that when the socket dies (the conn variable), I can't re-initialize the socket and assign it to that conn variable. What I think is happening is that the functions using the conn variable aren't using the 'new' one; they're still stuck on the 'old' one.
Line 9 -- Creating the original one.
Line 28 -- Creating an onMessage function for the conn object, within the handleSocketConnections function that gets called elsewhere at the start of the program
Line 57 -- Trying to re-assign a new connection to the conn variable in the reconnect function, that gets run whenever the app goes on standby (killing the socket connections).
Line 131 -- This gets called correctly from the reconnect function, connecting the socket to the server again
The reconnect() function runs correctly - the server registers the new connection with all the right info, but the app seems to still be in a weird state where there's no conn error (possibly looking at the new one?) but no actions are performed on the conn (possibly looking at the old one?).
Any ideas?
If you have to start a replacement webSocket connection, then you will need to rerun all the code that hooks up to the webSocket (installs event handlers, etc...). Because it's a new object, the old event listeners aren't associated with the new webSocket object.
The simplest way to do that is usually to create a single webSocketInit() function that you call both when you first create your webSocket connection and then call again any time you have to replace it with a new one. You can pass the latest webSocket object to webSocketInit() so any other code can see the new object. Individual blocks of code can register for onclose themselves if they want to know when the old one closes.
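For illustration, here is a minimal sketch of that pattern using the names from the question (ws_url and the handler bodies come from the code above; webSocketInit itself is just an illustrative name, not part of any library):
let conn;
function webSocketInit() {
  conn = new WebSocket(ws_url);
  // every handler has to be re-attached to the brand-new object
  conn.onmessage = e => { /* the onmessage logic from handleSocketConnections */ };
  conn.onclose = e => { /* the onclose logic shown above */ };
  conn.onerror = e => { /* the onerror logic shown above */ };
}
webSocketInit(); // at startup, instead of `let conn = new WebSocket(ws_url)`
// ...and inside reconnect(), call webSocketInit() instead of `conn = new WebSocket(ws_url)`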
There are also more event-driven ways to do this by creating an EventEmitter that gets notified whenever the webSocket has been replaced and individual blocks of code can subscribe to that event if they want to get notified of that occurrence.
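A rough sketch of that event-driven variant, assuming the npm 'events' package (or any small emitter) is available in the app; the 'socket-replaced' event name is made up for illustration:
import { EventEmitter } from 'events';
const socketBus = new EventEmitter();
let conn;
function replaceSocket() {
  conn = new WebSocket(ws_url);
  socketBus.emit('socket-replaced', conn); // tell interested code there is a new socket
}
// any block of code that holds handlers re-registers them when notified
socketBus.on('socket-replaced', socket => {
  socket.onmessage = e => { /* ... */ };
  socket.onclose = e => { /* ... */ };
});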
Related
I'm trying to implement a WebSocket with a fallback to polling. If the WebSocket connection succeeds, readyState becomes 1, but if it fails, readyState is 3, and I should begin polling.
I tried something like this:
var socket = new WebSocket(url);
socket.onmessage = onmsg;
while (socket.readyState == 0)
{
}
if (socket.readyState != 1)
{
// fall back to polling
setInterval(poll, interval);
}
I was expecting socket.readyState to update asynchronously, and allow me to read it immediately. However, when I run this, my browser freezes (I left it open for about half a minute before giving up).
I thought perhaps there was an onreadyStateChanged event, but I didn't see one in the MDN reference.
How should I be implementing this? Apparently an empty loop won't work, and there is no event for this.
This is simple and it works perfectly... you can add a condition for a maximum wait time, or a number of tries, to make it more robust...
function sendMessage(msg){
// Wait until the socket is ready, then send the message...
waitForSocketConnection(ws, function(){
console.log("message sent!!!");
ws.send(msg);
});
}
// Make the function wait until the connection is made...
function waitForSocketConnection(socket, callback){
setTimeout(
function () {
if (socket.readyState === 1) {
console.log("Connection is made")
if (callback != null){
callback();
}
} else {
console.log("wait for connection...")
waitForSocketConnection(socket, callback);
}
}, 5); // wait 5 milliseconds for the connection...
}
Here is a more elaborate explanation. First off, check the specific browser API, as not all browsers will be on the latest RFC. You can consult the RFC for the exact details.
You don't want to run a loop to constantly check the readyState; it's extra overhead you don't need. A better approach is to understand all of the events relevant to a readyState change, and then wire them up appropriately. They are as follows:
onclose: An event listener to be called when the WebSocket connection's readyState changes to CLOSED. The listener receives a CloseEvent named "close".
onerror: An event listener to be called when an error occurs. This is a simple event named "error".
onmessage: An event listener to be called when a message is received from the server. The listener receives a MessageEvent named "message".
onopen: An event listener to be called when the WebSocket connection's readyState changes to OPEN; this indicates that the connection is ready to send and receive data. The event is a simple one with the name "open".
JS is entirely event-driven, so you just need to wire up all of these events and check the readyState; this way you can switch from WS to polling accordingly.
I recommend you look at the Mozilla reference; it's easier to read than the RFC document and it will give you a good overview of the API and how it works (link).
Don't forget to register a callback to retry if you have a failure, and poll until the callback for a successful reconnect is fired.
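A hedged sketch of that wiring, reusing onmsg, poll, and interval from the question; the retry count and delay are arbitrary illustrations, not part of the WebSocket API:
function connectOrPoll(url, retriesLeft = 3) {
  const socket = new WebSocket(url);
  socket.onmessage = onmsg; // used once the connection opens
  socket.onerror = () => {
    if (retriesLeft > 0) {
      // retry the WebSocket a few times before giving up
      setTimeout(() => connectOrPoll(url, retriesLeft - 1), 1000);
    } else {
      // no luck: fall back to polling
      setInterval(poll, interval);
    }
  };
}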
I am not using polling at all. Instead, I use queuing.
First I declare the socket variable, create a queue, and define a new send function:
var ws = null
var msgs = []
function send (msg) {
if (ws.readyState !== 1) {
msgs.push(msg)
} else {
ws.send(msg)
}
}
Then I need to read and send when the connection is first established:
function my_element_click () {
if (ws == null){
ws = new WebSocket(websocket_url)
ws.onopen = function () {
while (msgs.length > 0) {
ws.send(msgs.shift()) // send queued messages in the order they were queued
}
}
ws.onerror = function(error) {
// do sth on error
}
}
const msg = {type: 'mymessage', data: my_element.value}
send(JSON.stringify(msg))
}
The WebSocket connection in this example is created only on the first click. Usually, by the second click, messages start to be sent directly.
Look at http://dev.w3.org/html5/websockets/ and search for "Event handler" to find the table.
onopen -> open
onmessage -> message
onerror -> error
onclose -> close
function update(e){ /*Do Something*/};
var ws = new WebSocket("ws://localhost:9999/");
ws.onmessage = update;
If you use async/await and you just want to wait until the connection is available, I would suggest this function:
async function connection(socket, timeout = 10000) {
const isOpened = () => (socket.readyState === WebSocket.OPEN)
if (socket.readyState !== WebSocket.CONNECTING) {
return isOpened()
}
else {
const intrasleep = 100
const ttl = timeout / intrasleep // time to loop
let loop = 0
while (socket.readyState === WebSocket.CONNECTING && loop < ttl) {
await new Promise(resolve => setTimeout(resolve, intrasleep))
loop++
}
return isOpened()
}
}
Usage (in an async function):
const websocket = new WebSocket('...')
const opened = await connection(websocket)
if (opened) {
websocket.send('hello')
}
else {
console.log("the socket is closed OR couldn't have the socket in time, program crashed");
return
}
tl;dr
A simple proxy wrapper that adds a 'state' event to WebSocket, emitted whenever its readyState changes:
const WebSocketProxy = new Proxy(WebSocket, {
construct: function(target, args) {
// create WebSocket instance
const instance = new target(...args);
//internal function to dispatch 'state' event when readyState changed
function _dispatchStateChangedEvent() {
instance.dispatchEvent(new Event('state'));
if (instance.onstate && typeof instance.onstate === 'function') {
instance.onstate();
}
}
//dispatch event immediately after websocket was initiated
//obviously it will be CONNECTING event
setTimeout(function () {
_dispatchStateChangedEvent();
}, 0);
// WebSocket "onopen" handler
const openHandler = () => {
_dispatchStateChangedEvent();
};
// WebSocket "onclose" handler
const closeHandler = () => {
_dispatchStateChangedEvent();
instance.removeEventListener('open', openHandler);
instance.removeEventListener('close', closeHandler);
};
// add event listeners
instance.addEventListener('open', openHandler);
instance.addEventListener('close', closeHandler);
return instance;
}
});
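Usage might look something like this (the URL is just a placeholder; both the 'state' event and the onstate property come from the wrapper above):
const ws = new WebSocketProxy('ws://localhost:9999/');
ws.addEventListener('state', () => {
  console.log('readyState is now', ws.readyState);
});
// or, equivalently, via the onstate property the wrapper checks for
ws.onstate = () => console.log('readyState changed:', ws.readyState);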
A long explanation:
You can use a Proxy object to monitor inner WebSocket state.
This is a good article which explains how to do it: Debugging WebSockets using JS Proxy Object.
And here is an example code snippet from the article above, in case the site isn't available in the future:
// proxy the window.WebSocket object
var WebSocketProxy = new Proxy(window.WebSocket, {
construct: function(target, args) {
// create WebSocket instance
const instance = new target(...args);
// WebSocket "onopen" handler
const openHandler = (event) => {
console.log('Open', event);
};
// WebSocket "onmessage" handler
const messageHandler = (event) => {
console.log('Message', event);
};
// WebSocket "onclose" handler
const closeHandler = (event) => {
console.log('Close', event);
// remove event listeners
instance.removeEventListener('open', openHandler);
instance.removeEventListener('message', messageHandler);
instance.removeEventListener('close', closeHandler);
};
// add event listeners
instance.addEventListener('open', openHandler);
instance.addEventListener('message', messageHandler);
instance.addEventListener('close', closeHandler);
// proxy the WebSocket.send() function
const sendProxy = new Proxy(instance.send, {
apply: function(target, thisArg, args) {
console.log('Send', args);
target.apply(thisArg, args);
}
});
// replace the native send function with the proxy
instance.send = sendProxy;
// return the WebSocket instance
return instance;
}
});
// replace the native WebSocket with the proxy
window.WebSocket = WebSocketProxy;
Just like you defined an onmessage handler, you can also define an onerror handler. This one will be called when the connection fails.
var socket = new WebSocket(url);
socket.onmessage = onmsg;
socket.onerror = function(error) {
// connection failed - try polling
}
Your while loop is probably locking up your thread. Try using:
setTimeout(function(){
if(socket.readyState === 0) {
//do nothing
} else if (socket.readyState !== 1) {
//fallback
setInterval(poll, interval);
}
}, 50);
In my use case, I wanted to show an error on screen if the connection fails.
let $connectionError = document.getElementById("connection-error");
setTimeout( () => {
if (ws.readyState !== 1) {
$connectionError.classList.add( "show" );
}
}, 100 ); // ms
Note that in Safari (9.1.2) no error event gets fired - otherwise I would have placed this in the error handler.
I have been working on a project which requires starting and stopping a cron scheduler when a user clicks a button on the front end. Basically, when a user clicks the button, the cron job starts, and clicking the stop button stops the timer. It is as simple as that.
To achieve that, I am making POST requests to the Node.js/Express backend on button click, which trigger the start/stop functions of the scheduler. This is what the endpoint looks like:
const cron = require('node-cron');
router.post('/scheduler', async (req, res) => {
// gets the id from the button
const id = req.body.id;
try{
// finds the scheduler data from the MongoDB
const scheduler = await Scheduler.find({ _id: id });
// checks whether there is a scheduler or not
if ( !scheduler ) {
return res.json({
error: 'No scheduler found.'
});
}
// creates the cronjob instance with startScheduler
const task = cron.schedule('*/10 * * * * *', () => {
console.log('test cronjob running every 10secs');
}, {
scheduled: false
});
// checks if the scheduler is already running or not. If it is then it stops the scheduler
if ( scheduler.isRunning ) {
// scheduler stopped
task.stop();
return res.json({
message: 'Scheduler stopped!'
});
}
// starts the scheduler
task.start();
res.json({
message: 'Scheduler started!'
});
}catch(e) {
console.log(e)
}
});
Right now the scheduler runs perfectly, but it doesn't stop on the second button click. It keeps on running. I feel like I'm not calling task.start() and task.stop() in the correct places, and I don't know where the correct places are. I'm actually new to cron jobs.
It would be great if someone could tell me what I am doing wrong.
Thanks in advance.
Every time you hit the scheduler API, a new cron job instance is made, and you are stopping the newly defined instance of the cron job, not the previous one.
The solution is to define the cron job outside the scope of the router, so that the instance doesn't change whenever you hit the scheduler API.
Like this:
const cron = require('node-cron');
// creates the cronjob instance with startScheduler
const task = cron.schedule('*/10 * * * * *', () => {
console.log('test cronjob running every 10secs');
}, {
scheduled: false
});
router.post('/scheduler', async (req, res) => {
// gets the id from the button
const id = req.body.id;
try{
// finds the scheduler data from the MongoDB
const scheduler = await Scheduler.find({ _id: id });
// checks whether there is a scheduler or not
if ( !scheduler ) {
return res.json({
error: 'No scheduler found.'
});
}
// checks if the scheduler is already running or not. If it is then it stops the scheduler
if ( scheduler.isRunning ) {
// scheduler stopped
task.stop();
return res.json({
message: 'Scheduler stopped!'
});
}
// starts the scheduler
task.start();
res.json({
message: 'Scheduler started!'
});
}catch(e) {
console.log(e)
}
});
The problem might come from the line:
const task = cron.schedule('*/10 * * * * *', () => {
which actually creates a new task and uses a new Scheduler, as you can see in the source code of node-cron:
https://github.com/node-cron/node-cron/blob/fbc403930ab3165ffef7d53387a29af92670dfea/src/node-cron.js#L29
function schedule(expression, func, options) {
let task = createTask(expression, func, options);
storage.save(task);
return task;
}
(which, internally, uses https://github.com/node-cron/node-cron/blob/fbc403930ab3165ffef7d53387a29af92670dfea/src/scheduled-task.js#L7):
let task = new Task(func);
let scheduler = new Scheduler(cronExpression, options.timezone, options.recoverMissedExecutions);
So, when you call:
task.stop();
As far as I understand, you are calling the stop method of a brand new task, not the stop method of the task you launched the first time you clicked the button.
Judging by your code, the problem is that you are not actually using your scheduler while using the task.
PS:
The module also exposes a function that lets you retrieve tasks from its storage: https://github.com/node-cron/node-cron/blob/fbc403930ab3165ffef7d53387a29af92670dfea/src/node-cron.js#L58
But as I haven't found any documentation about it, I do not recommend using it.
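If you do need one job per scheduler id rather than the single shared task from the other answer, one hedged variation is to keep the created tasks in a Map that lives outside the route handler; the Map and the keying by id are illustrative, not a node-cron feature:
const cron = require('node-cron');
const tasks = new Map(); // id -> cron task, survives across requests
router.post('/scheduler', async (req, res) => {
  const id = req.body.id;
  const scheduler = await Scheduler.find({ _id: id }); // Scheduler model from the question
  if (!scheduler) {
    return res.json({ error: 'No scheduler found.' });
  }
  let task = tasks.get(id);
  if (!task) {
    task = cron.schedule('*/10 * * * * *', () => {
      console.log(`test cronjob for ${id} running every 10secs`);
    }, { scheduled: false });
    tasks.set(id, task);
  }
  if (scheduler.isRunning) {
    task.stop(); // the same instance that was started on an earlier request
    return res.json({ message: 'Scheduler stopped!' });
  }
  task.start();
  res.json({ message: 'Scheduler started!' });
});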
This is a long post, so I appreciate those who answer it. I am trying to understand the websocket communication in the blockchain example below.
Here is the source code for a node in a blockchain:
const BrewChain = require('./brewChain');
const WebSocket = require('ws');
const BrewNode = function(port){
let brewSockets = [];
let brewServer;
let _port = port
let chain = new BrewChain();
const REQUEST_CHAIN = "REQUEST_CHAIN";
const REQUEST_BLOCK = "REQUEST_BLOCK";
const BLOCK = "BLOCK";
const CHAIN = "CHAIN";
function init(){
chain.init();
brewServer = new WebSocket.Server({ port: _port });
brewServer.on('connection', (connection) => {
console.log('connection in');
initConnection(connection);
});
}
const messageHandler = (connection) =>{
connection.on('message', (data) => {
const msg = JSON.parse(data);
switch(msg.event){
case REQUEST_CHAIN:
connection.send(JSON.stringify({ event: CHAIN, message: chain.getChain()}))
break;
case REQUEST_BLOCK:
requestLatestBlock(connection);
break;
case BLOCK:
processedRecievedBlock(msg.message);
break;
case CHAIN:
processedRecievedChain(msg.message);
break;
default:
console.log('Unknown message ');
}
});
}
const processedRecievedChain = (blocks) => {
let newChain = blocks.sort((block1, block2) => (block1.index - block2.index))
if(newChain.length > chain.getTotalBlocks() && chain.checkNewChainIsValid(newChain)){
chain.replaceChain(newChain);
console.log('chain replaced');
}
}
const processedRecievedBlock = (block) => {
let currentTopBlock = chain.getLatestBlock();
// Is the same or older?
if(block.index <= currentTopBlock.index){
console.log('No update needed');
return;
}
//Is claiming to be the next in the chain
if(block.previousHash == currentTopBlock.hash){
//Attempt the top block to our chain
chain.addToChain(block);
console.log('New block added');
console.log(chain.getLatestBlock());
}else{
// It is ahead.. we are therefore a few behind, request the whole chain
console.log('requesting chain');
broadcastMessage(REQUEST_CHAIN,"");
}
}
const requestLatestBlock = (connection) => {
connection.send(JSON.stringify({ event: BLOCK, message: chain.getLatestBlock()}))
}
const broadcastMessage = (event, message) => {
brewSockets.forEach(node => node.send(JSON.stringify({ event, message})))
}
const closeConnection = (connection) => {
console.log('closing connection');
brewSockets.splice(brewSockets.indexOf(connection),1);
}
const initConnection = (connection) => {
console.log('init connection');
messageHandler(connection);
requestLatestBlock(connection);
brewSockets.push(connection);
connection.on('error', () => closeConnection(connection));
connection.on('close', () => closeConnection(connection));
}
const createBlock = (teammember) => {
let newBlock = chain.createBlock(teammember)
chain.addToChain(newBlock);
broadcastMessage(BLOCK, newBlock);
}
const getStats = () => {
return {
blocks: chain.getTotalBlocks()
}
}
const addPeer = (host, port) => {
let connection = new WebSocket(`ws://${host}:${port}`);
connection.on('error', (error) =>{
console.log(error);
});
connection.on('open', (msg) =>{
initConnection(connection);
});
}
return {
init,
broadcastMessage,
addPeer,
createBlock,
getStats
}
}
module.exports = BrewNode;
When a new block is created by the node with the createBlock() function, a message is broadcast from the node to all connected sockets with the broadcastMessage() function to tell them a new block has been created. The connected sockets will receive the message, and each of them will hit the BLOCK case in the switch statement inside messageHandler(). I have a grasp of this process, and have drawn up a graph to show my understanding.
FIGURE 1
As stated earlier, when A creates a new block it will send the new block to its connected nodes, where each node will verify it and possibly add it to its chain. This processing is done by the processedRecievedBlock() function. Let's say B and C decide to add the block to their chain, but D is several blocks behind so it must request the whole chain from A. This is where I am confused. I expected that D would send a message back to A requesting the whole chain, like this:
FIGURE 2
However, according to the processedRecievedBlock() function, in this situation D will broadcast a REQUEST_CHAIN message to all its connected sockets when this line is run:
broadcastMessage(REQUEST_CHAIN,"");
Let's say D is connected to E and F. Instead of requesting the chain from A like in FIGURE 2, it seems as though it will send the REQUEST_CHAIN message to its connected sockets, like this:
FIGURE 3
In the messageHandler() function, the REQUEST_CHAIN option in the switch statement will be run for E and F, and they will hit this line of code:
connection.send(JSON.stringify({ event: CHAIN, message: chain.getChain()}));
It is my understanding that this will cause E and F to send their own chain back to themselves, like this:
FIGURE 4
I want to know why FIGURE 2 does not occur when D needs to request the whole chain from A. Tracing the code has led me to believe that FIGURE 3 and FIGURE 4 occur instead, neither of which seems useful.
I am trying to find an understanding of what exactly happens in this code when a node must request the whole chain from another node. I must be misunderstanding what these sockets are doing.
Complete source code: https://github.com/dbjsdev/BrewChain/blob/master/brewNode.js
Thanks for a descriptive question. :)
You are right for the most part and Figure 3 is the correct description of that part of the process. But Figure 4 is wrong.
Note that every socket connection between peers leads to a different connection instance, and these are collectively maintained in brewSockets.
So, when A/E/F receive a REQUEST_CHAIN message on their connection from D, they respond with the whole chain, as in the code below:
connection.send(JSON.stringify({ event: CHAIN, message: chain.getChain()}));
D then processes the CHAIN message:
const processedRecievedChain = (blocks) => {
let newChain = blocks.sort((block1, block2) => (block1.index - block2.index))
if(newChain.length > chain.getTotalBlocks() && chain.checkNewChainIsValid(newChain)){
chain.replaceChain(newChain);
console.log('chain replaced');
}
}
Now, onto the 'why'!
Firstly, the underlying principle is that we trust in the network, not just one node. So, you want to verify the authenticity of the chain from as many sources as possible.
Secondly, you want the latest chain from your peers, not just any random chain.
By doing so, we ensure that any node is as up to date as its peers. So node D fetches the chain from multiple sources and stores the latest verified chain.
Hope that helps!
I have a Node.js socket.io application where I have a few different events and listeners. Right now this is how I am doing it.
class testEmitterClass extends events {
}
const testEmitter = new testEmitterClass();
io.on('connection', function (socket) {
console.log('connected');
let dnsInactiveTermsListener = function (dnsInactiveTerms) {
socket.emit(socketEvents.DNS_INACTIVE_TERMS, dnsInactiveTerms);
};
let checkpointInactiveTermsListener = function(checkpointInactiveTerms) {
socket.emit(socketEvents.CHECKPOINT_INACTIVE_TERMS, checkpointInactiveTerms);
};
let dnsActiveTermsListener = function (dnsActiveTerms) {
socket.emit(socketEvents.DNS_ACTIVE_TERMS, dnsActiveTerms);
};
let checkpointActiveTermsListener = function(checkpointActiveTerms) {
socket.emit(socketEvents.CHECKPOINT_ACTIVE_TERMS, checkpointActiveTerms);
};
let dnsCountListener = function (dnsCountStreaming) {
socket.emit(socketEvents.DNS_COUNT, dnsCountStreaming);
};
testEmitter.on(socketEvents.CHECKPOINT_ACTIVE_TERMS, checkpointActiveTermsListener);
testEmitter.on(socketEvents.DNS_INACTIVE_TERMS, dnsInactiveTermsListener);
testEmitter.on(socketEvents.CHECKPOINT_INACTIVE_TERMS, checkpointInactiveTermsListener);
testEmitter.on(socketEvents.DNS_ACTIVE_TERMS, dnsActiveTermsListener);
testEmitter.on(socketEvents.DNS_COUNT, dnsCountListener);
socket.on('disconnect', function () {
console.log('disconnected');
testEmitter.removeListener(socketEvents.DNS_INACTIVE_TERMS, dnsInactiveTermsListener);
testEmitter.removeListener(socketEvents.DNS_ACTIVE_TERMS, dnsActiveTermsListener);
testEmitter.removeListener(socketEvents.DNS_COUNT, dnsCountListener);
testEmitter.removeListener(socketEvents.CHECKPOINT_INACTIVE_TERMS, checkpointInactiveTermsListener);
testEmitter.removeListener(socketEvents.CHECKPOINT_ACTIVE_TERMS, checkpointActiveTermsListener);
})
});
The testEmitter is a single instance that emits events somewhere else, and those events are sent to the client using socket.io.
Is there a way to maintain a single list of the listeners somewhere so that this code can be maintained better? How can I map events to listeners so that they can be added and removed as a client connects to and disconnects from socket.io, without making a mess?
socketEvents is just an object of event names.
const DNS_COUNT = 'dnsCount';
const DNS_INACTIVE_TERMS = 'dnsInactiveTerms';
const DNS_ACTIVE_TERMS = 'dnsActiveTerms';
const CHECKPOINT_INACTIVE_TERMS = 'checkpointInactiveTerms';
const CHECKPOINT_ACTIVE_TERMS = 'checkpointActiveTerms';
module.exports = {
DNS_COUNT,
DNS_INACTIVE_TERMS,
CHECKPOINT_INACTIVE_TERMS,
DNS_ACTIVE_TERMS,
CHECKPOINT_ACTIVE_TERMS
};
Hope I made myself clear, thanks!
I think you can change the whole way you do things. Rather than register an event handler for every single socket that connects, you can just broadcast the message to all connected sockets. So, I think you can replace everything you show with just this:
class testEmitterClass extends events {
}
const testEmitter = new testEmitterClass();
const notifications = [
'CHECKPOINT_ACTIVE_TERMS',
'DNS_INACTIVE_TERMS',
'CHECKPOINT_INACTIVE_TERMS',
'DNS_ACTIVE_TERMS',
'DNS_COUNT'
];
for (let msg of notifications) {
testEmitter.on(socketEvents[msg], function(data) {
// send this message and data to all currently connected sockets
io.emit(socketEvents[msg], data);
});
}
Also notice that the code has been DRYed by using a table of messages that you can loop through rather than repeating the same statements over and over again. So, now to add, remove or edit one of your notification messages, you just modify the table in one place.
If socketEvents is just an object with these 5 properties on it (as you show above), then you could even remove the notifications array by just iterating the properties of socketEvents.
That would further reduce the code to this:
class testEmitterClass extends events {
}
const testEmitter = new testEmitterClass();
for (let msg of Object.keys(socketEvents)) {
testEmitter.on(socketEvents[msg], function(data) {
// send this message and data to all currently connected sockets
io.emit(socketEvents[msg], data);
});
}
I've got some code that looks like this (this is an excerpt of a much larger project):
this.oldMessagesSubject = new Subject<Message[]>();
this.oldMessages = this.oldMessagesSubject.asObservable();
getOldMessages(): Observable<Message[]> {
this.chatHub.server.getMessages();
return this.oldMessages;
}
I then subscribe to it like this:
this.chatService.getOldMessages()
.subscribe(res => this.messages = res);
I'm not very experienced with ReactiveX, but when I subscribe to the observable returned by getOldMessages(), it iterates through each of the values that it has received since the app started. So if I get the old messages for 'chat 1' that works fine. But if I then navigate to a different page and get the old messages for 'chat 2' the observable emits the messages for 'chat 1' and then the messages for 'chat 2'.
I have a feeling I'm using the Observables in the wrong way and I'd really appreciate any help.
EDIT:
This is where next is called:
this.chatHub.client.showMessages = (messages: Message[]) => this.oldMessagesSubject.next(messages);
getMessages is an RPC to the server. Here is the relevant method on the server (C#):
public Task GetMessages()
{
try
{
// Get the current user
var user = repo.GetUser(Context.User.Identity.Name);
// Associate the current user with a connection
var connection = chatContext.Find(x => x.UserId == user.UserId);
if (connection != null)
{
// Get all the messages in the user's chat (encrypted)
List<EncryptedMessage> encryptedMessages = repo.GetMessages(connection.ChatId);
List<Message> messages = new List<Message>();
// Get the decryption key
byte[] key = Encoding.Default.GetBytes(ConfigurationManager.AppSettings["secret"]).Take(16).ToArray();
// Decrypt the messages
foreach (var encryptedMessage in encryptedMessages)
{
Message message = new Message();
message.Id = encryptedMessage.Id;
message.GroupId = encryptedMessage.GroupId;
message.Owner = encryptedMessage.Owner;
message.Sent = encryptedMessage.Sent;
message.Body = cryptoProvider.DecryptMessage(encryptedMessage.Body, key);
messages.Add(message);
}
// Return the messages to the client
return Clients.Caller.ShowMessages(messages);
}
return Clients.Caller.LoginError();
}
catch (Exception ex)
{
return Clients.Caller.Exception(ex);
}
}
When I'm back at my workstation, I'll debug and check that the problem isn't server side. I should be able to tell if next is being called multiple times on each subscription.
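For what it's worth, one quick client-side check (purely illustrative, built on the chatHub wiring shown above) is to count the calls before pushing into the subject:
let showMessagesCalls = 0;
this.chatHub.client.showMessages = (messages: Message[]) => {
  showMessagesCalls++;
  console.log(`showMessages call #${showMessagesCalls}: ${messages.length} messages`);
  this.oldMessagesSubject.next(messages);
};
If the counter only climbs once per navigation but the subscriber still sees the older chat replayed, that would point at the client side rather than the server.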