Detect connection loss immediately - WebSocket - JavaScript

Is there a way to trigger a function immediately when a WebSocket loses its internet connection?
socket.onclose and socket.onerror do not fire quickly enough to meet my requirement.

There are situations in which the OS itself doesn't know that the connection was lost (such as sudden network failures). These are sometimes known as half-open connections.
Since the OS doesn't know that the connection was closed, the browser or Node server isn't notified and your WebSocket callback isn't called.
There's a nice blog article about half-open connections here.
You could mitigate the issue, for example by:
closing the socket yourself under suspicious circumstances (e.g., manually closing the socket when the page loses focus).
implementing a client-side ping. A failure to receive a timely reply would indicate to the program (and the OS) that the connection was lost, resulting in the onclose callback being called.
These options will always suffer from some delay. Network loss detection isn't easy.
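As a sketch of the second option: the code below assumes the server echoes a "pong" text message for every "ping" it receives; those message names and the timing constants are assumptions for illustration, not part of any standard.

```javascript
// Client-side heartbeat sketch: send "ping" periodically and close the
// socket ourselves if no "pong" arrives in time, so onclose fires
// promptly instead of waiting for the OS to notice the dead link.
function startHeartbeat(socket, pingIntervalMs = 5000, pongTimeoutMs = 2000) {
  let pongTimer = null;

  socket.addEventListener("message", (event) => {
    if (event.data === "pong") {
      clearTimeout(pongTimer); // reply arrived in time
      pongTimer = null;
    }
  });

  const interval = setInterval(() => {
    socket.send("ping");
    // No pong within the timeout: assume a half-open connection and
    // close the socket ourselves so the onclose callback runs now.
    if (pongTimer === null) {
      pongTimer = setTimeout(() => socket.close(), pongTimeoutMs);
    }
  }, pingIntervalMs);

  socket.addEventListener("close", () => clearInterval(interval));
}

// Usage: startHeartbeat(new WebSocket("wss://example.com/socket"));
```

The detection delay is bounded by pingIntervalMs + pongTimeoutMs, so tightening those values trades bandwidth for faster detection.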

Related

When is it preferable to use ws.terminate() VS ws.close()

I know terminate() hard-closes a socket, not letting it hang or send more packets like close() does, and terminate() results in a different close code (1006 instead of the 1000 that close() sends). So this raises the question: why would I even use ws.close() when I don't plan on reopening the socket? I've seen so many examples that just use ws.close() and never reopen it. Is it just not well known or standard, or is there something behind the scenes I'm not aware of?
Closing the WebSocket will wait for buffered messages to be sent, then initiate the closing handshake, which includes an optional close code and an optional close reason. It tells the client that the WebSocket is now closed and no further messages will be sent; once the other end acknowledges that, the TCP connection is closed. See 1.4 Closing handshake:
The closing handshake is intended to complement the TCP closing handshake (FIN/ACK), on the basis that the TCP closing handshake is not always reliable end-to-end, especially in the presence of intercepting proxies and other intermediaries.
By sending a Close frame and waiting for a Close frame in response, certain cases are avoided where data may be unnecessarily lost. For instance, on some platforms, if a socket is closed with data in the receive queue, a RST packet is sent, which will then cause recv() to fail for the party that received the RST, even if there was data waiting to be read.
This is considered a clean closure. The close code informs the other end of the status, or leads to status 1005 if omitted.
Terminating the WebSocket will just drop the connection. It calls socket.destroy(); no close frame is sent, and the TCP connection isn't even closed cleanly with a FIN. This leads to status 1006 on the other end, after it has run into a timeout.
The downside of having a closing handshake in the protocol is that a closing connection might be left hanging in a limbo state when the other end is no longer responding at all. Only a timeout will end that, with various problems.
Why would I even use ws.close() when I don't plan on reopening a socket?
You should always gracefully close() your WebSocket to notify the other end that the connection is ending. You should also send close code 1000 (as the client) or 1001 (as the server) to tell the other end that they should not try to reconnect.
If a server needs to drop the connection but will become available again soon after, it might forcibly close the TCP connection. This will cause the client to immediately detect an abnormal closure, and it might attempt to automatically reconnect after a short time. (There is no method to do this in the ws library, you'd have to call ws._socket.end()).
If you terminate() a working connection, the other end will not notice anything. Only when it tries to send something (e.g. a ping()) will it find, after some timeout, that the TCP connection was lost and the WebSocket is abnormally closed. Don't do that!
You should only ever call terminate() if you have determined that the connection has already been lost - like in the heartbeat example. This will free up resources on your end quickly and immediately fire the respective stream events on the ws object.
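The heartbeat the answer refers to boils down to a periodic sweep over the server's clients. The terminate() and ping() calls below match the ws server API; factoring the sweep into its own function is just a presentation choice here.

```javascript
// Heartbeat sweep in the spirit of the ws library's heartbeat example:
// any client that never answered the previous ping is considered dead
// and is terminate()d, which frees resources immediately.
function sweepClients(clients) {
  for (const ws of clients) {
    if (!ws.isAlive) {
      ws.terminate();     // no pong since the last sweep: the link is dead
      continue;
    }
    ws.isAlive = false;   // reset; the next pong marks it alive again
    ws.ping();
  }
}

// Wiring, assuming `wss` is a ws WebSocketServer instance:
// wss.on("connection", (ws) => {
//   ws.isAlive = true;
//   ws.on("pong", () => { ws.isAlive = true; });
// });
// const interval = setInterval(() => sweepClients(wss.clients), 30000);
// wss.on("close", () => clearInterval(interval));
```

Note that a freshly connected client always survives its first sweep, because isAlive starts out true; it can only be terminated after missing a full ping round.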

Under what circumstances can reconnection actually occur with Server Sent Events?

I am using Server-Sent Events for my small chat application, and I am storing a list of sent events on my server, along with an id field on each SSE message.
Server Sent Events apparently has a concept of automatic reconnection, but I cannot seem to find a single instance where this actually occurs in practice.
For example, if you are on Android and tab out of the application, wait 30 seconds, and then tab back in, the connection is broken. But the onerror event never fires, and the readyState stays OPEN. So the only option to handle this situation seems to be polling for heartbeats and doing a manual reconnection, re-initializing the EventSource object if curr_time - last_heartbeat > heartbeat_interval.
Another instance of disconnection is when you, for example, disable Wi-Fi and then re-enable it. However, when this occurs, Android Chrome just automatically refreshes the page, so reconnection doesn't occur here either (instead, the refresh causes a disconnect and then a fresh connection).
So, am I missing something? SSE is touted as being very robust for its automatic reconnection ability, but I cannot find a single case where it actually performs a reconnection. What instances are there where a reconnection can actually occur, such that I can test this behavior?
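The heartbeat-polling workaround the question describes can be sketched as below. It assumes the server emits a named "heartbeat" event at a known interval; the event name, interval, and URL are assumptions for illustration.

```javascript
// Watchdog for an EventSource: if no "heartbeat" event arrives within
// twice the expected interval, drop the source and build a new one,
// since readyState can stay OPEN on a dead connection.
function watchHeartbeat(makeSource, heartbeatIntervalMs = 15000) {
  let source = makeSource();
  let lastBeat = Date.now();

  const attach = (src) =>
    src.addEventListener("heartbeat", () => { lastBeat = Date.now(); });
  attach(source);

  const timer = setInterval(() => {
    if (Date.now() - lastBeat > heartbeatIntervalMs * 2) {
      source.close();          // browser still reports OPEN; force it
      source = makeSource();   // manual "reconnection"
      lastBeat = Date.now();
      attach(source);
    }
  }, heartbeatIntervalMs);

  return () => { clearInterval(timer); source.close(); };
}

// Usage in a browser:
// const stop = watchHeartbeat(() => new EventSource("/events"));
```

Because the EventSource is recreated rather than resumed, the server will not receive a Last-Event-ID header for missed events, unlike the browser's built-in reconnection.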

WebSockets not closing on IE if closing handshake is never made

I've been implementing a WebSocket with JavaScript and I have this one problem:
The endpoint that my web-application is connected to doesn't send back a close control frame when I'm sending it one.
This isn't that bad because browsers close the WebSocket connection after a while.
But a few things to notice are:
Browsers only allow a specific number of WebSockets to be connected at the same time.
When refreshing the web-application a new WebSocket is created
This causes the problem on IE:
When refreshing the web-application more than 6 times, a WebSocket connection cannot be made.
It seems like IE doesn't "delete" the WebSockets if they haven't been closed cleanly. And what's odd is that the number of WebSockets never seems to decrease by refreshing or just by waiting.
Only closing the browser window or the tab resets the number of WebSockets to 0.
I've done some researching and this is what I've found out:
Browsers only support a specific number of WebSockets being connected at the same time.
IE supports 6 concurrent WebSockets [link].
Chrome supports 255 concurrent WebSockets [link].
socket.onclose() isn't triggered when you call socket.close(); it is called when the endpoint responds with a close message. [link]
IE waits 15 seconds for the endpoint to send the close message [link].
Chrome waits 60 seconds for the responding message [sorry, no link for this; I found it out by testing].
If no response message is received, the browser closes the WebSocket connection and a TimeoutError should occur.
Please correct me if I'm wrong :)
I've tried to use onbeforeunload to disconnect from the endpoint, in the hope that the browser would close the connection after a while, but with no luck. [link]
It could also be because IE isn't able to make requests inside the onbeforeunload handler [link].
Question:
Is there any way to reset the number of WebSockets that are connected in the browser to the endpoint with JavaScript?
Is there a way to disconnect a WebSocket from the endpoint immediately without closing the connection cleanly?
Is the only way to get this to work to inform the ones who host their endpoint make some changes so they do send a closing frame back?
Is there anything I've misunderstood or that I could try to get this to work?
Here is (in my opinion) good documentation about the WebSocket protocols if somebody would like to read more about it [link1] [link2].
UPDATE:
It's only when refreshing the web application in IE that the WebSockets don't get destroyed.
If you navigate between pages in the web application, a new WebSocket is created and the previous one gets destroyed.
If it is just an edge-case problem, then using an HTTP fallback might be your only option. I guess you already do this for proxy servers that block socket connections.
There is just one idea to verify (unconfirmed); unfortunately, I don't have access to IE to verify it.
The application could open the WebSocket connection in a WebWorker/iframe. During a page refresh, that "WebSocket connection scope" would be deleted, and the connection freed.
EXPLANATION
This content from the question body:
Only by refreshing the web-application on IE the WebSockets don't get destroyed. If you navigate between pages in the web-application a new WebSocket will be made but the last WebSocket will get destroyed.
This says that the WebSocket connection is not destroyed ONLY when the page refreshes; during normal navigation, everything is OK.
So, if the WebSocket connection is opened within another scope that is deleted during page reload, then hopefully the connection will be destroyed too.

Websocket connection drop(improper) detect behind firewall

I'm facing a very strange issue in fm.websync behind a Cyberoam connection.
This library connects to a WebSocket, and the onReceive handler of that channel receives messages from the server.
On any usual network, the WebSocket connection (HTTP 101 request) remains persistent and I'm able to receive messages.
Behind the firewall, the JavaScript code reaches the onSuccess handler of the channel subscribe, but no messages are received. On inspecting the Chrome browser tab, I observe that the WebSocket connection request got to the completed state (instead of being in the pending state forever).
I realize that this is some issue with the firewall, and this question addresses it, but I was wondering if there's any approach to programmatically determine this state: basically, to switch to HTTP polling if the WebSocket is not running properly.
One solution I could think of is to keep a global flag and set it to true in the onReceive handler, and also to start a timeout function (2-3 s) before the channel subscribe to verify whether the flag is true, and otherwise fall back. I'm looking forward to a neater, time-independent (lag-independent) approach.
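That flag-and-timeout idea can be sketched as below. subscribe and fallbackToPolling are stand-in callbacks, not the actual fm.websync API, which I'm not reproducing here.

```javascript
// If the subscription "succeeds" but no message ever arrives within the
// grace period, assume the firewall silently broke the WebSocket and
// switch to HTTP polling.
function subscribeWithFallback(subscribe, fallbackToPolling, graceMs = 3000) {
  let gotMessage = false;

  subscribe({
    onReceive: () => { gotMessage = true; },
  });

  setTimeout(() => {
    if (!gotMessage) fallbackToPolling();
  }, graceMs);
}
```

This is still timer-based; a lag-independent variant would need the server to send an explicit "hello" message immediately after the subscription, so the client waits for a guaranteed first message rather than for an arbitrary grace period.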

node.js - how to get disconnect event with socket.io? [duplicate]

I have a client/server application using Node.js on the server and socket.io as the connection mechanism. For reasons relevant to my application, I want to have only one active connection per browser, and to reject all the connections from other tabs that may be opened later on during the session. This works great with WebSockets, but if WebSockets isn't supported by the browser and XHR polling is used instead, the disconnection never happens. So if the user just refreshes the page, this is not interpreted as a reconnection (I have a delay for reconnection and session restoring) but as a new tab, which ends in the connection being rejected because the old connection made by this same tab is still active.
I'm looking for a way to effectively end the connection from the client whenever a refresh occurs. I've tried binding to the beforeunload and calling socket.disconnect() on the client side, and also sending a message like socket.emit('force-disconnect') and triggering the disconnect from the server with no success. Am I missing something here? Appreciate your help!
I've read this question and couldn't find it useful for my particular case.
Solved the issue; it turns out it was a bug introduced in socket.io 0.9.5. If you have this issue, just update BOTH your server and client-side code to socket.io > 0.9.9 and set the client-side socket.io option 'sync disconnect on unload' to true, and you're all set.
Options are set this way:
var socket = io.connect('http://yourdomain.com', {'sync disconnect on unload' : true});
You can also get "Error: xhr poll error" if you run out of open file descriptors available. This is likely to happen during a load test.
Check the current open file descriptor size:
ulimit -n
Increase it to a high number:
ulimit -n 1000000
