This is my server-side WebSocket script:
var clients = [ ];
//sample request: ****:8080/?steamid=123456789
var connection;
var aqsteamid = getParameterByName("steamid",request.resource);
connection = request.accept(null, request.origin);
connection.ID = aqsteamid;
connection.balRefreshes = 0;
connection.clientIndex = clients.push(connection) - 1;
//check if this user is already connected. If yes, kicks the previous client ***====EDITED====***
for (var i = 0; i < clients.length; i++)
{
    if (clients[i].ID === aqsteamid) {
        var indx = clients.indexOf(clients[i]);
        clients[indx].close();
    }
}
console.log('ID',connection.ID,' connected.');
socket.on('close', function(webSocketConnection, closeReason, description){
    try {
        console.log('ID', webSocketConnection.ID, 'disconnected. (' + closeReason + ';' + description + ')');
        webSocketConnection.balRefreshes = 0;
        webSocketConnection.spamcheck = false;
        clients.splice(webSocketConnection.clientIndex, 1);
    } catch(e) {
        console.log(e);
    }
});
Basically, what I want is to kick any existing connection with the same ID (for example, when someone connects from multiple browser tabs).
But instead of kicking only the old client, it sometimes kicks both clients, and in other cases both clients remain connected with the same ID.
Is there another way to do this, or is there a mistake in my script?
Thanks
Using an object instead of an array to key the clients pool makes this faster and simpler:
var clients = {};
//sample request: ****:8080/?steamid=123456789
var connection;
var aqsteamid = getParameterByName("steamid", request.resource);
connection = request.accept(null, request.origin);
connection.ID = aqsteamid;
connection.balRefreshes = 0;

//check if this user is already connected. If yes, kick the previous client.
//Do this BEFORE storing the new connection, so we never close the connection we just accepted.
if (clients[aqsteamid]) clients[aqsteamid].close();
clients[aqsteamid] = connection;

socket.on('close', function(webSocketConnection, closeReason, description){
    try {
        console.log('ID', webSocketConnection.ID, 'disconnected. (' + closeReason + ';' + description + ')');
        webSocketConnection.balRefreshes = 0;
        webSocketConnection.spamcheck = false;
        //only remove the pool entry if it still points at the connection that closed;
        //otherwise a kicked client would delete its replacement from the pool
        if (clients[webSocketConnection.ID] === webSocketConnection) {
            delete clients[webSocketConnection.ID];
        }
    } catch(e) {
        console.log(e);
    }
});

console.log('ID', connection.ID, ' connected.');
With an object pool we can drop all of the array looping and comparison logic, and the index can never get out of sync.
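As a side benefit, iterating the pool for things like broadcasts stays trivial. A minimal sketch, assuming the WebSocket-Node connection API (sendUTF) used above:

// send a text frame to every connected client in the object pool
function broadcast(message) {
    for (var id in clients) {
        if (clients.hasOwnProperty(id)) {
            clients[id].sendUTF(message);
        }
    }
}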
It sounds like multiple connections with the same ID could be part of a genuine workflow with multiple tabs (unlike, say, malicious users that intentionally scrape data with multiple threads...).
Rather than "kicking" the users from other tabs and then having to deal with them reconnecting, a more elegant solution would be to introduce an orchestration layer across the tabs.
You can rely on the localStorage API to elect a master tab that handles communication with the server (it doesn't really matter whether it's WebSocket or AJAX) and shares responses with the other tabs, again through localStorage. It doesn't really matter whether you have 1 or 20 tabs open once you can share that data, since you care about the same message notifications, stock ticker updates, or whatever.
From another Stack Overflow answer:
The storage event lets you propagate data between tabs while keeping a single SignalR connection open (thereby preventing connection saturation). Calling localStorage.setItem('sharedKey', sharedData) will raise the storage event in all other tabs (not the caller):
$(window).bind('storage', function (e) {
    var sharedData = localStorage.getItem('sharedKey');
    if (sharedData !== null)
        console.log(
            'A tab called localStorage.setItem("sharedData",' + sharedData + ')'
        );
});
Given the code above, if the sharedKey value is already available when the page loads, assume a master tab is active and read the shared values from localStorage. You can check whether a master-tab re-election is needed (i.e. that browser tab has been closed or navigated away) with an interval, or by relying on something more sophisticated like the Page Visibility API.
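A minimal sketch of that election logic; the masterTabId and masterHeartbeat keys and the 5-second timeout are illustrative, not part of any library:

// each tab gets a random id and periodically tries to claim mastership via localStorage
var tabId = Math.random().toString(36).slice(2);

function tryBecomeMaster() {
    var heartbeat = parseInt(localStorage.getItem('masterHeartbeat') || '0', 10);
    // if no master has reported in for 5 seconds, assume it is gone and take over
    if (Date.now() - heartbeat > 5000) {
        localStorage.setItem('masterTabId', tabId);
        localStorage.setItem('masterHeartbeat', String(Date.now()));
    }
    return localStorage.getItem('masterTabId') === tabId;
}

setInterval(function () {
    if (tryBecomeMaster()) {
        // this tab owns the single server connection: refresh the heartbeat
        // and publish server responses for the other tabs via 'sharedKey'
        localStorage.setItem('masterHeartbeat', String(Date.now()));
    }
    // non-master tabs just read 'sharedKey' through the storage event shown above
}, 1000);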
Note that you're not limited to sharing the "same" data across multiple tabs; you can batch any requests over a shared channel.
Related
With the following code I open a stream connection to the Binance crypto exchange:
let adress = 'wss://stream.binance.com:9443/ws/btcusdt#kline_1h';
const ws = new WebSocket(adress);
If I make this call for different cryptocurrencies, I end up with several streams open. I want to know how I can check which streams are currently open. Is there a function or a property that shows for which currencies I have an open stream running?
I ask because I also have the problem that streams sometimes seem to stop, and I don't have a good solution for checking which streams have stopped and how to reconnect them. My idea is to first find a way to check which streams are running, and then, if one stream has stopped, simply send the connection request again.
In JavaScript the WebSocket object exposes events, so all you need is:
ws.onclose = function(event) {
    ws = new WebSocket(adress);
    //just reopen it
};

Or, for more safety, you can:

ws.onclose = function(event) {
    if (event.wasClean) {
        ws = new WebSocket(adress);
    } else {
        console.log('Connection error!');
        //or whatever u want
    }
};
Sorry for the rough styling, I'm a newbie here.
If you have your ws variable, then checking whether the WebSocket is open and alive is done with:
if (ws && ws.readyState === 1) { // 1 === WebSocket.OPEN
    // is opened
}
For the other states of the WebSocket, see the docs.
If you want to receive push messages from the server, you need to keep the ws connection open. If not, you can close the ws after a query and reopen it for the next query. You should wait for the closed state (ws.readyState === 3) before reopening.
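A small sketch of that close-then-reopen pattern; the polling helper is illustrative only:

// close the current socket, wait until it reports CLOSED (readyState 3), then reopen
function reopenWhenClosed(oldWs, url, onOpen) {
    oldWs.close();
    let timer = setInterval(function () {
        if (oldWs.readyState === 3) { // 3 === WebSocket.CLOSED
            clearInterval(timer);
            let ws = new WebSocket(url);
            ws.onopen = onOpen;
        }
    }, 100);
}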
If you need to keep all ws connections open, then you need a list of ws Objects. You push new objects to the list:
let ws_list = [] // global list of ws objects

let create_connection = function(url) {
    try {
        ws_list.push(new WebSocket(url));
    } catch(err) {
        console.log(err, url);
    }
}

let do_something = function() {
    for (let ws of ws_list) {
        // do something with the ws object
    }
}
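To cover the "which streams are running, and how do I reconnect the dead ones" part of the question, one option is to key the pool by stream name instead of using a plain list, then periodically reopen anything that is no longer open. A rough sketch under that assumption (the 1-second check interval is arbitrary):

let streams = {} // stream name -> WebSocket object

let open_stream = function(name) {
    // name is the Binance stream path, e.g. the kline stream from the question
    streams[name] = new WebSocket('wss://stream.binance.com:9443/ws/' + name);
}

// periodically check every stream and reopen the ones that have stopped
setInterval(function() {
    for (let name of Object.keys(streams)) {
        let ws = streams[name];
        if (ws.readyState === 2 || ws.readyState === 3) { // CLOSING or CLOSED
            console.log('stream stopped, reconnecting:', name);
            open_stream(name);
        }
    }
}, 1000);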
I need to perform load testing on our WebSocket service.
Is there a way to open multiple websocket connections from a single workstation?
I already tried the npm ws and websocket modules, launching them with Node.js. It works fine with a single connection, but when I try to open several in a for loop it either throws an exception or only the last opened client is used.
If you want to do it from a single workstation, you can use child processes:
app.js:
var cp = require('child_process');
var children = [];
var max = 10; // tweak this to whatever you want

for (var i = 0; i < max; i++) {
    children[i] = cp.fork('./child.js')
        .on('error', function(e) {
            console.log(e);
        })
        .on('message', function(m) {
            console.log(m);
        })
        .on('exit', function(code) {
            console.log('exit with code', code);
        });
}
Then, in child.js, just have a script that starts a connection. You can also send messages between parent and child using the child process API.
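A minimal child.js sketch under those assumptions, using the npm ws package mentioned in the question (the URL is a placeholder for the service under test):

// child.js - each forked child opens one WebSocket connection
var WebSocket = require('ws');

var ws = new WebSocket('ws://localhost:8080'); // placeholder URL

ws.on('open', function() {
    process.send('connected'); // report back to the parent via the child process API
    ws.send('hello from load test client');
});

ws.on('message', function(data) {
    process.send('received: ' + data);
});

ws.on('error', function(err) {
    process.send('error: ' + err.message);
});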
I had built interception calls into the socket.io file located under node_modules/socket.io/lib/client.js, on version 1.3.7 (at least I think so; in any case I have to update to 1.4.5 because of other requirements). These changes allowed me to spoof information coming from the sender socket, and they were made before the data was passed on to the receiver socket.
Before (around 1.3.7), the method run before sending a packet was the following:
Client.prototype.packet = function(packet, preEncoded, volatile){
var self = this;
var sockets = this.sockets[0]; //this holds the socket object
But now (1.4.5) socket.io has changed the call to the following:
Client.prototype.packet = function(packet, opts){
var sockets = this.sockets[0]; //gives undefined
I tried to look through the given objects but couldn't find the sockets of the receiving user.
Back in 1.3.7 I could effortlessly attach properties to a socket object (e.g. socket.some-property = 1; in the .js file run by Node.js at the root of the server) and later read some-property back in node_modules/client.js whenever the receiver got a packet, so I could intercept the call. Now that no longer works, and I would like to apply my old code to this new context so everything functions again.
var socketObject = {};

io.sockets.on('connection', function (client) {
    socketObject[client.id] = {socket: client};

    client.on('data', function (someData) {
        socketObject[client.id].data = someData;
    });

    client.on('disconnect', function() {
        delete socketObject[client.id];
    });
});
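If the goal is to rewrite data on its way to the receiver without patching node_modules, one alternative (plain JavaScript wrapping, not a socket.io-specific API) is to wrap the receiving socket's emit at connection time:

io.sockets.on('connection', function (client) {
    // keep a reference to the original emit and wrap it,
    // so every outgoing event to this client can be inspected or modified first
    var originalEmit = client.emit;
    client.emit = function (event) {
        var args = Array.prototype.slice.call(arguments, 1);
        console.log('outgoing', event, args); // spoof/adjust the payload here
        return originalEmit.apply(client, arguments);
    };
});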
I have recently developed a web app using PeerJS, and am trying to add reconnect functionality.
Basically, my app works by someone creating a server that clients then connect to. The server person can control what the hosts are doing, but it's basic two-way communication.
If a client disconnects, they simply reconnect and it works normally. However, if the server user refreshes the page, or their computer crashes, then they need to be able to re-establish control over the clients.
The first step is regaining the original connection ID and peer API ID, which is easy, as they are stored in a database and assigned a unique ID that the server user can use to query them. Then, to enable the client to reconnect, I do this upon close:
// connection is closed by the host involuntarily...
conn.on('close', function() {
    // if the client's connection closes, set up a reconnect request loop - when the host takes back control
    // the client will auto reconnect...
    connected = false;
    conn = null;
    var reconnect_timer = setInterval(function () {
        console.log('reconnecting...'); // make a fancy animation here...
        conn = peer.connect(connectionid, {metadata: JSON.stringify({'type': 'hello', 'username': username})});
        // upon connection
        conn.on('open', function() { // if this fails need to provide an error message... DO THIS SOON
            // run the connect function...
            connected = true;
            connect(conn);
        });
        // didn't connect yet
        conn.on('error', function(err) {
            connected = false;
        });
        if (connected === true) {
            clearInterval(reconnect_timer);
        }
    }, 1000);
});
This appears to work: on the server end the client looks like it has reconnected, the connect function has fired, etc. However, messages can't be sent between them, and the client console says:
Error: Connection is not open. You should listen for the `open` event before sending messages.(…)
even though the 'open' event is listened for above...
I hope this is clear - any help is appreciated :)
So, in the end, to create an auto-reconnect script I simply dealt with the client end of things, ensuring the peer was recreated with the same host ID and key (api_key for cloud servers):
peer = new Peer(return_array.host_id, {key: return_array.api_key});
and then having the client, upon connection closing:
// connection is closed by the host involuntarily...
conn.on('close', function() {
    // if the client's connection closes, set up a reconnect request loop - when the host takes back control
    // the client will auto reconnect...
    peer.destroy();     // destroy the link
    connected = false;  // set the connected flag to false
    conn = null;        // destroy the conn
    peer = null;        // destroy the peer
    // set a variable which means function calls to launchPeer will not overlap
    var run_next = true;
    // periodically attempt to reconnect
    reconnect_timer = setInterval(function() {
        if (connected === false && run_next === true) {
            run_next = false; // stop this bit rerunning before launchPeer has finished...
            if (launchPeer(false) === true) {
                clearInterval(reconnect_timer);
            } else {
                run_next = true; // assignment, not comparison - otherwise the loop stalls after the first failed attempt
            }
        }
    }, 1000);
});
Where launchPeer will attempt to launch a new peer. To ensure continuity, the new ID from the client replaces the old ID from the client, and everything is a smooth takeover. The hardest part in the end was having the setInterval only fire once, which is achieved (badly...) through the use of boolean flags.
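For what it's worth, a setTimeout-based retry loop avoids the overlapping-calls problem without the boolean flags, because the next attempt is only scheduled after the previous one finishes. A sketch, assuming a hypothetical tryLaunchPeer helper (analogous to launchPeer) that reports success through a callback:

// schedule the next reconnect attempt only after the current one has finished
function attemptReconnect() {
    tryLaunchPeer(function (succeeded) { // hypothetical async helper
        if (!succeeded) {
            setTimeout(attemptReconnect, 1000); // retry in a second; no flags needed
        }
    });
}
attemptReconnect();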
Thanks to anybody who read this and thought about how they could help :)
I'm currently forced to create a new RabbitMQ connection every time a user loads a page on my website.
This creates a new TCP connection each time. However, I'm trying to reduce the number of TCP connections I make to RabbitMQ with the Node.js AMQP plugin. Here is what I have:
var ex_conn = get_connection(uri); //http:rabbitm.com

if (ex_conn == false) {
    var tempConn = amqp.createConnection({
        url: uri
    });
    connections.push({
        host: uri,
        obj: tempConn
    });
} else {
    var tempConn = ex_conn.obj;
}
The issue I'm running into is that if I try to do:
tempConn.on('ready', function() {
});
Then the ready handler does not get triggered. I'm assuming that is because the ready callback has already fired for the existing connection and will not be re-triggered. What I'm looking to do is bind a new queue by doing:
tempConn.queue('', {});
Any thoughts on how to get around this issue are much appreciated.
Thanks.
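One way around this (a sketch, not tested against the node-amqp plugin's exact behaviour) is to remember whether each cached connection has already emitted 'ready', and run the queue-binding code immediately in that case:

// cache entries remember whether the connection has become ready yet
function withConnection(uri, onReady) {
    var entry = get_connection(uri); // assumed existing lookup into the connections array
    if (entry === false) {
        var conn = amqp.createConnection({ url: uri });
        entry = { host: uri, obj: conn, ready: false };
        connections.push(entry);
        conn.on('ready', function () {
            entry.ready = true;
            onReady(conn);
        });
    } else if (entry.ready) {
        onReady(entry.obj); // connection already ready: 'ready' will not fire again
    } else {
        entry.obj.on('ready', function () { onReady(entry.obj); }); // still connecting: wait for it
    }
}

// usage: bind a queue once the cached connection is usable
withConnection(uri, function (conn) {
    conn.queue('', {}, function (q) {
        // work with the queue here
    });
});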