websocket client discovering server - javascript

I am in a situation where I want my WebSocket client to connect to a server, but the server's IP address or DNS name is unknown. Both client and server are on the same local network (connected to the same router). I tried something like this:
for (let i = 1; i < 255; i++) {
    // One probe socket per candidate address; block-scoping keeps each handler bound to its own socket.
    const socket = new WebSocket('ws://192.168.1.' + i + ':8080/service');
    socket.onopen = function () {
        console.log('WebSocket Connected!!');
    };
    socket.onclose = function (event) {
        console.log('WebSocket Disconnected!!');
    };
    socket.onmessage = function (event) {
        console.log('WebSocket receive msg: ' + event.data);
    };
}
This works but I am not sure if I am doing it right or if there is a better way to do it. Any help is appreciated.

Have you tried hooking up an onerror listener to see what errors are being thrown? It's possible that you're finding the server but that some mis-configuration in the server is causing it to error out before the connection is opened.
WebSockets is still a very actively evolving standard. There are multiple drafts out there and some browsers support some drafts but not others. Old drafts are often considered insecure, so some browsers don't support them, while other browsers only support the old drafts because they haven't been updated for the newer ones. Also, servers may be in the same boat. It's kind of the wild west.
I suggest putting in robust error handling and also fallback when things aren't working right. Packages like Socket.io offer this kind of transparent fallback support. I suggest checking that out if you're looking for a quick solution. However, if you're just using this as a learning experience (and I encourage such behavior!), you will want to hook up an onerror handler to see what's going on and why each connection fails.
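For instance, here is a minimal sketch of wiring onerror (and onclose) listeners onto each probe socket so you can at least see which address fails and when; the address pattern comes from the question, and the logging is only illustrative:

for (let i = 1; i < 255; i++) {
    const url = 'ws://192.168.1.' + i + ':8080/service';
    const probe = new WebSocket(url);

    probe.onopen = function () {
        console.log('WebSocket connected to ' + url);
    };
    probe.onerror = function (event) {
        // Browsers deliberately hide low-level failure details from scripts,
        // but this still tells you which address produced an error.
        console.log('WebSocket error for ' + url);
    };
    probe.onclose = function (event) {
        // event.code can hint at whether the handshake failed or the network dropped.
        console.log('WebSocket closed for ' + url + ', code=' + event.code);
    };
}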

This solution would of course scale horribly once you deploy on a network larger than a class C (/24), and when there is more than one WebSocket server on the network, you wouldn't know which one to hit.
But as long as there is no DNS and no static IP addresses on your network, port-scanning the whole IP range is the only way to find the server.
Are you sure it's impossible to assign a static IP address to the server machine? Most consumer-grade routers don't strictly enforce the use of DHCP, and usually it's not a problem when only some machines are configured with static IP addresses. Some router firmwares also let you configure the DHCP server to always assign the same IP address to specific MAC addresses.

Related

How to reconnect socket after a passive network switch

So I have a Swift client and a Node.js server, and I am using socket.io. I have an issue where, when the user passively switches from WiFi to LTE while connected to the server (if they turn off WiFi manually it works fine), for some reason they don't reconnect to the server (they just hit a ping timeout). I've tried increasing the ping timeout to 50 seconds with no effect. My users interact with each other while connected to the same room, so this is a big issue.
My connection code on the client-side looks like this:
var socket: SocketIOClient?
fileprivate var manager: SocketManager?

func establishConnection(_ completion: (() -> Void)? = nil) {
    let socketUrlString: String = serverURL
    self.manager = SocketManager(socketURL: URL(string: socketUrlString)!,
                                 config: [.forceWebsockets(true), .log(false), .reconnects(true), .extraHeaders(["id": myDatabaseID])])
    self.socket = manager?.defaultSocket
    self.socket?.connect()
    //self.socket?.on events go here
}
On the server side (Node.js), my connection code looks like this:
const io = require('socket.io')(http, {
    pingTimeout: 10000
});

io.on('connection', onConnection);

function onConnection(socket) {
    let headerDatabaseID = socket.handshake.headers.id;
    // In the loop below, I disconnect any socket that has the same database ID as the one that just
    // connected (when a client is in a room with other clients and then switches from WiFi to LTE,
    // the client reconnects to the server and this removes the old connection from the room it was in).
    for (let [id, connectedSocket] of io.sockets.sockets) {
        if (connectedSocket.databaseID == headerDatabaseID && socket.id != id) {
            connectedSocket.disconnect();
            break;
        }
    }
    //socket.on() events here
}
My issue is this: how do I go about reconnecting the client when it makes the passive network switch (WiFi -> LTE or vice versa)? I thought that just adding .reconnects(true) would work, but for some reason it doesn't...
Please let me know if I can be more detailed/helpful or if you'd like to see other code.
I believe the solution to your problem can be either simple or complex; that depends on your requirements. I assume that each chat room has its own ID.
If you store that ID in memory on the device, then when the user reconnects you can have the socket re-join the room ID you had last, and they will be back in that room. This is insecure, though.
If rooms are protected and not public, someone may be able to connect to a room they are not allowed in if they know or can guess the room ID. To solve that problem, you'd need to implement some sort of authentication, or a server-side database that keeps track of that sort of thing.
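As a rough sketch of the simple (insecure) variant on the server side, assuming the client keeps its last room ID in memory and emits a hypothetical 'rejoin' event after reconnecting (the event and payload names here are made up for illustration, not part of the question's code):

io.on('connection', (socket) => {
    // Hypothetical event: the client sends the room ID it stored before the network switch.
    socket.on('rejoin', (roomID) => {
        // Without authentication, anyone who knows or guesses roomID can join this room.
        socket.join(roomID);
        socket.to(roomID).emit('peer-rejoined', socket.id);
    });
});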
Considering the behavior varies based on whether the handoff is manual or passive, it sounds like the issue is on the iOS client. I notice that you are using sockets - it seems to be some sort of custom sockets package, right? Is there a reason for using this? URLSession is a higher-level implementation and it manages things like handoff.
There is something called Wi-Fi Assist, developed by Apple, to manage handoff. It is part of the OS and manages this internally. According to Apple: "Using URLSession and the Network framework already gives us the new WiFi assist benefits." This was released in iOS 9, in September 2015.
But if you are using some other kind of sockets, whatever this "SocketIOClient" is - especially a package developed prior to September 2015 - you are probably bypassing Wi-Fi Assist. The latest version of the Socket.IO client I see was written in 2015, and it appears support for this package was discontinued when iOS 9 came out.
When the user manually changes the connection, this manually prompts the OS to tear down and reestablish the connection, whereas a passive handoff normally relies on Wi-Fi Assist.
You could try to programmatically tear down and reestablish the connection when you detect that a passive handoff has occurred, but I wouldn't recommend this... for starters, it will make your code much messier, and it will probably degrade the user experience. But worse, this may not be the only problem you run into using this outdated Socket.IO package; there's really no telling what kind of maintenance problems you will wind up with. Better to just refactor your code to use the up-to-date networking mechanisms provided by iOS.
If .reconnects(true) isn't working, you can try to take care of the problem manually with Reachability. The Reachability.swift library may make this easier - it's the Reachability functionality "re-written in Swift with closures."
In your case, you might use it like this:
let reachability = try! Reachability()

reachability.whenReachable = { reachability in
    if reachability.connection == .wifi {
        print("Reachable via WiFi")
    } else {
        print("Reachable via Cellular")
    }
    // Either way, drop the old connection and reconnect on the new interface.
    self.socket?.disconnect()
    self.establishConnection() // this is your method defined in the question
}
reachability.whenUnreachable = { _ in
    print("Not reachable")
}

do {
    try reachability.startNotifier()
} catch {
    print("Unable to start notifier")
}

Keeping Web Socket Server Alive

Zup coders. I've implemented a simple website that uses WebSockets, with PHP on the server (the consik Yii2 solution: https://github.com/consik/yii2-websocket) and JS (HTML5) on the client.
Everything is working fine; I only have one issue with my solution: making sure the server is always alive.
I thought about saving the WebSocket server instance into a cache and setting up a cron job that checks the state of the instance. I installed memcached and found out that I can't save a serialized version of the WebSocket server instance. Is this a good solution? Would Redis fix this?
I also thought about using client-side JS to react to "Error during WebSocket handshake: Unexpected response code: 200", but I can't seem to get it working. I also don't like making the URL that starts the WebSocket server public.
Ex:
connect = function(){
    websocket = new WebSocket(webSocketURL);
    websocket.onerror = function(){
        $.get("/startWebSocketServer", function(data){
            connect();
        });
    };
};
connect();
Thanks!
I think that what you actually need is a process supervisor that takes care of "supervising" your server process and reacts to process/system events like crashes, restarts, etc.
There are several solutions depending on your case (standard OS implementations, personal preferences, whatever fits your needs); here is a list: http://without-systemd.org/wiki/index.php/Init - the "Service managers" section should best fit your needs.
Supervisord is easy to set up and configure; it could be a good start, thanks to the good number of examples around the net.
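As an illustration only, a minimal supervisord program entry for this kind of setup could look like the sketch below; the program name, command, and paths are assumptions (use whatever console command actually starts your yii2-websocket server):

[program:yii2-websocket]
; Assumed console command that starts the WebSocket server; adjust to your application.
command=php /var/www/app/yii websocket/start
directory=/var/www/app
autostart=true
autorestart=true
startretries=3
stdout_logfile=/var/log/yii2-websocket.out.log
stderr_logfile=/var/log/yii2-websocket.err.log

With something like this in place, supervisord restarts the process if it ever dies, which removes the need for the cache/cron check described in the question.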
Solution 1: using a cache would not be the most orthodox way to implement a custom-made supervisor.
Solution 2: is legit as long as it informs the user about a problem, but a call to an exposed endpoint that starts a service is, IMHO, a security flaw.

Websockets not connected behind proxy

This is quite a common problem, but I cannot find a solution to my specific case. I'm using Glassfish 4.1.1 and my application implements WebSockets.
On the client side I'm connecting to the WS server simply by:
var serviceLocation = "ws://" + window.location.host + window.location.pathname + "dialog/";
var wsocket = new WebSocket(serviceLocation + token_var);
On the server side, WebSockets are implemented via the @ServerEndpoint functionality and look very standard:
@ServerEndpoint(value = "/dialog/{token}", decoders = DialogMessageDecoder.class)
public class DialogWebsoketEndpoint {
    @OnOpen
    public void open(final Session session, @PathParam("token") final String token) { ... }
    // etc.
}
Everything works fine up to the moment when a customer tries to connect from behind a proxy.
Using this test: http://websocketstest.com/ I've found that the customer's computer sits behind an HTTP 1.1 proxy.
He cannot connect to WebSockets; onopen simply does not fire at all, and wsocket.readyState never becomes 1.
How can I tune my ServerEndpoint to make this code work even when the customer is connecting from behind a proxy?
Thank you in advance!
UPDATE: Here are the websocketstest results on that computer:
On my computer the results look similar except for one thing:
HTTP Proxy: NO.
As the comments to the question state, it seems the proxy doesn't support WebSockets properly.
This is a common issue (some cell-phone companies have proxies that disrupt websocket connections) and the solution is to use TLS/SSL connections.
The issue comes up mainly because some proxies "correct" (read: corrupt) the Websocket request headers.
However, when using TLS/SSL, the proxies can't read the header data (which is encrypted), causing data "pass-through" on most proxies.
This means the headers will arrive safely at the other end and the proxy will (mostly) ignore the connection... this might still cause an issue where connection timeouts are concerned, but it usually resolves the issue.
EDIT
Note that browsers will prevent the client from mixing non-encrypted content with encrypted content, so make sure the script initiates the WebSocket connection using the wss:// variant when TLS/SSL is used.
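For example, a small sketch of choosing the scheme based on how the page itself was loaded (serviceLocation and token_var mirror the variables from the question; everything else is illustrative):

// Use wss:// when the page is served over HTTPS, plain ws:// otherwise.
var scheme = (window.location.protocol === "https:") ? "wss://" : "ws://";
var serviceLocation = scheme + window.location.host + window.location.pathname + "dialog/";
var wsocket = new WebSocket(serviceLocation + token_var);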

Websocket can't send/receive messages on Chrome/Firefox, works fine on Microsoft Edge

I'm making a remote debugging tool for Unity (C#), and I've set up a C# WebSocket server in the game that emits Log messages.
The remote debugging client is in JavaScript, on a page served by an HTTP server also created by the game.
I seem to be running into issues sending messages on some browsers, and I'm not sure why. I am running the WebSocket server on localhost and running the client locally, and I know that kind of setup is not really liked by Chrome/Firefox. But the weird thing is that I'm not getting any hard errors or exceptions; failures seem to happen silently.
I'm pretty certain that the issue is JS/browser related, as the C# WebSocket server works and receives connections in all cases.
Anyway, here's the socket part of the JS code:
var socket = null;
var host = "ws://" + window.location.hostname;
var port = 55000;
var url = host + ":" + port + "/msg";

function CheckSocketStatus()
{
    if(socket != null){
        console.log(socket.readyState);
    }
}

function CreateSocket()
{
    socket = new WebSocket(url);

    socket.onopen = function()
    {
        // Web Socket is connected, send data using send()
        console.log("Socket Open!");
        socket.send("Here's a client message for ya!");
    };
    socket.onmessage = function (evt)
    {
        var message = evt.data;
        console.log("MSG: " + message);
        var obj = JSON.parse(message);
        console.log(obj);
        console.log(obj.type);
        if(obj.type == "log"){
            console.log("Received Log");
            handleLogMessage(obj);
        }
    };
    socket.onerror = function()
    {
        console.log("Error!");
    };
    socket.onclose = function(event)
    {
        // websocket is closed.
        console.log(event.code);
        console.log("Connection is closed...");
        socket = null;
    };
}
In all cases, when I call CreateSocket() a socket gets created and successfully connects to the server. I also have that CheckSocketStatus() function, which logs "1" after the socket opens (which should mean open/ready to send and receive). After that, here are the results:
Chrome:
Chrome will immediately close the socket after connecting. The only thing I do in the onopen() function is a console.log() and a send(). If I remove the send() then the socket will stay open. I do not receive any messages from the server.
Firefox:
Firefox will keep the socket open indefinitely even if I call the send() function in onopen(). However, the server does not receive any messages from the client and vice versa. I feel like I managed to get it to send client->server earlier, but I could not reproduce that while testing for this question.
Microsoft Edge:
Weirdly enough, Edge works just fine. I can receive and send messages. Works exactly as intended.
Node Webkit (nw.js):
I'm also trying to write this as an nw.js app. Predictably, as it's running on Chromium (or something googly), it produces the same results as Chrome.
So I'm not really sure what's going on. I'm not really a web programmer, so intricate HTTP stuff is not really my forte. I'm really hoping it's just a local-file issue with Chrome/Firefox and that it'll work fine on those platforms if I'm connecting to an external host. I'll try to test this tomorrow at work with a non-localhost server, and I'll update with my findings.
I guess the answer I'm looking for is what these symptoms point to and how I can get chrome/firefox/webkit to work properly.
Also what does Edge do here that the others do not?
Thanks in advance! If you need any more info from me please just ask! I didn't want to overload this question just in case there's a simple answer.
Update:
So I just tried connecting from my laptop to my desktop and the same issues still persist, so to my surprise it's not a local-file issue. I'm a bit stumped. I might have to look at the server code as well. I've also been told to try a wrapper like socket.io, which might solve some platform-dependent issues. I've worked with Socket.io/Unity before but I don't think I was having these issues (I wasn't running a server on the C# side that time, there don't seem to be any good socket.io server implementations in C#, and I'm not sure if socket.io interfaces with normal WebSockets). So that might point to a problem with my implementation on the C# side.
So I figured it out, thanks to gman. I looked at some of his code and noticed that he used a setting on his WebSocketBehavior class called "IgnoreExtensions".
The websocket-sharp documentation has this to say:
"If it's set to true, the service will not return the Sec-WebSocket-Extensions header in its handshake response."
"I think this is useful when you get something error in connecting the server and exclude the extensions as a cause of the error."
So I guess that header did not play well with Chrome/Firefox. I'm still doing some testing, but this fixed the behavior I was seeing with those browsers.
So if you get similar errors, do that!

block access from subdomains in socket.io

I'm trying to allow access to socket.io only if the website the connection is coming from is one of the whitelisted subdomains on my server. Ideally I could check the origin subdomain every time a client connects to my socket.io server. I tried finding out how to do it, but haven't found a good solution yet.
The only thing that comes close to a solution is this answer to a related question. However, I'm not sure if that's the best way to do it, whether it even works in my case, and whether it can be faked via JavaScript.
TLDR: How do I treat socket.io requests differently based on their origin? If that's not possible: how do I host two socket.io servers on two subdomains but on the same port?
Regarding the duplicate flag: my question is entirely different. I cannot use namespaces as a solution, since I can't trust the client-side JavaScript running on some subdomains. These subdomains could just join a different namespace, which would make my efforts to separate them pointless.
I found this answer with the help of some guy on the socket.io slack server.
io.on('connection', function (socket) {
    var subdomain = socket.handshake.headers.host.split('.')[0];
    if (subdomain === 'allowed') {
        socket.on('login', /* ... */);
    } else {
        socket.on('login', function () {
            console.log('login disabled, wrong subdomain');
        });
    }
});
I don't know if it's reliable or whether it can be modified by malicious client JavaScript, but it was looking quite good while I was testing it.
I added this modified code to the express/socket.io middleware function so it gets called on every request: connect, disconnect, and streaming.
I also use the express-subdomain npm package:
https://www.npmjs.com/package/express-subdomain
app.sio.use((socket, next) => {
    // Reject handshakes that don't come from a whitelisted subdomain ('allowed' is a placeholder for your whitelist).
    var subdomain = socket.request.headers.host.split('.')[0]
    if (subdomain !== 'allowed') return next(new Error('subdomain not allowed'))
    sessionMiddleware(socket.request, {}, next)
})
