According to the accepted answer on Stack Overflow, the ping timeout must be greater than the ping interval, but in the examples in the official socket.io docs, the timeout is less than the interval. Which one is correct? Also, what would be ideal values for both settings for a shared-whiteboard application where the moderator should not be able to edit the canvas while disconnected (e.g. when their internet drops)?
According to the socket.io documentation:
Among those options:
pingTimeout (Number): how many ms without a pong packet to consider the connection closed (60000)
pingInterval (Number): how many ms before sending a new ping packet (25000).
Those two parameters will impact the delay before a client knows the server is not available anymore. For example, if the underlying TCP connection is not closed properly due to a network issue, a client may have to wait up to pingTimeout + pingInterval ms before getting a disconnect event.
This leads me to believe there is no dependency of one value on the other being greater. You will likely want a higher timeout to give slow network connections time to deliver a response. The interval is the time from a failure until the next ping attempt; it should be long enough to allow reconnection, but not so long that you are holding a dead connection open.
As for ideal values, this will be application specific. A few things I would consider:
How responsive must your application be?
How long will your application take to respond to a network request?
How large is the data being passed back and forth?
How many concurrent users will you have?
These are just a few. For a small local application you would likely be fine with a timeout of 10000 and an interval of 5000, but this is an absolute guess; you will need to weigh the previously mentioned bullet points.
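As a concrete starting point, here is a minimal sketch (not a recommendation) of how those two options are set on the socket.io server, using the guessed values above; for the whiteboard use case, the disconnect handler is one place to revoke the moderator's edit rights:

const { Server } = require("socket.io");

// Guessed values from above: ping every 5 s, treat the connection as
// dead after 10 s without a pong. Tune both for your real network.
const io = new Server(3000, {
  pingInterval: 5000,
  pingTimeout: 10000
});

io.on("connection", (socket) => {
  socket.on("disconnect", (reason) => {
    // Whiteboard use case: lock the moderator's canvas here.
    console.log("socket " + socket.id + " disconnected: " + reason);
  });
});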
Related:
According to a blog post from Jake Archibald, Chrome 88 implements 3 stages of throttling.
According to the throttling implementation from Chrome 57:
There are a number of automatic exemptions from this throttling:
Applications playing audio are considered foreground and aren’t throttled.
Applications with real-time connections (WebSockets and WebRTC), to avoid closing these connections by timeout. The run-timers-once-a-second rule is still applied in these cases.
The second citation states imperatively that once an application has a WebSocket connection, the application is exempt from throttling.
The fact is, we use the @microsoft/signalr library as a top-level API for WebSocket connections, and this library uses internal ping messages (not ping opcodes) wrapped in setTimeout. After 5 minutes of background work, that timer is throttled and stops sending ping messages, which leads to a close event and the WebSocket connection being closed.
I'm asking for a more detailed explanation:
Does Chrome 88 enable throttling for applications that have real-time connections?
Will timers be throttled regardless of whether a WebSocket connection is present, with only the WebSocket instances themselves exempt from throttling?
According to this post, the same issues have been reported.
The quick explanation is:
As Jake wrote in his blog post about heavy throttling:
the browser will check timers in this group once per minute. Similar to before, this means timers will batch together in these minute-by-minute checks.
That is it! After a tab has spent 5 minutes in the background, the SignalR ping algorithm is throttled to once per minute. But the default values are keepAliveIntervalInMilliseconds = 15 s and serverTimeoutInMilliseconds = 30 s, half the heavy-throttling timer delay, so the server counts this as a ping failure, which is the predicate for invoking the lifetime methods and stopping the connection. First, the server side tries to stop the connection with a disconnect handshake, and since the client is still physically connected, the result is a CloseEvent with code 1000 and wasClean = true. This behaviour won't produce any errors.
Front-end clients must update @microsoft/signalr to version >= 5.0.6 to solve this problem. Changes
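If upgrading is not immediately possible, one hedged mitigation (a sketch, not the library's official fix) is to raise the client's keepalive and timeout above the one-minute heavy-throttling granularity; the server's ClientTimeoutInterval would have to be raised to match, and the hub URL below is hypothetical:

import * as signalR from "@microsoft/signalr";

const connection = new signalR.HubConnectionBuilder()
  .withUrl("/notificationHub") // hypothetical hub URL
  .withAutomaticReconnect()
  .build();

// Push both values above the ~1-minute throttling granularity so a
// single delayed ping no longer exceeds the timeout.
connection.keepAliveIntervalInMilliseconds = 60000;
connection.serverTimeoutInMilliseconds = 120000;

connection.start().catch(console.error);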
I use setInterval to fetch my notification counter every 5 seconds. I think it's a bad idea to get the results this way, because if you stay on my site for a while you end up with a huge number of XHR loads, whereas on Facebook you don't see that many XHRs. Here is a capture of my site's XHR traffic:
My code, in the file notification.php:
// Fetch the current notification count and display it
function getnotificount() {
    $.post('getnotificount.php', function(data) {
        $('#notifi_count').html(data);
    });
}

// Poll every 5 seconds
setInterval(function() {
    getnotificount();
}, 5000);
Your code is OK. It is not 'loading more than 1 billion XHR requests'; it's starting (and finishing, as we can see) a request every X seconds, and there's nothing wrong with that.
However, it's not the best way to implement a push notification system. That would be websockets, which give your client a way to 'listen' to messages from your server. There are frameworks for this, the most popular one (and the one I recommend) being socket.io.
Your third and most advanced/modern option would be a service-worker-based notification system, but that's likely far more complexity than this problem calls for.
What you are doing is polling, which is making a new request to the server on a regular basis. For an HTTP request, a TCP connection is opened, a request and response are exchanged, and the TCP connection is closed. This is what happens every 5 s in your case.
If you want a more lightweight solution, have a look at websockets such as socket.io. Only one TCP connection is opened and maintained between the front end and the back end. This bidirectional connection enables the back end to notify the front end when something happens.
It is not a bad idea at all. It is called polling and it is used in many places as a means of regularly getting data from the server. However, it is neither the best nor the most modern solution. In fact, if your server supports WebSockets, then you should use them. You do not need NodeJS to use WebSockets, since WebSocket is a protocol for creating a duplex communication channel between your server and the client. You should read about WebSockets. You can also use push notifications, which are inferior to WebSockets in my opinion. And a hacky way is to use a forever frame.
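To make the contrast concrete, here is a minimal socket.io sketch of the same counter being pushed instead of polled; the event name "notifi_count" reuses the element id above, and fetchNotificationCount is a hypothetical stand-in for your database lookup:

// Server (Node.js): push the count once per connection (or whenever it changes)
const { Server } = require("socket.io");
const io = new Server(3000);

io.on("connection", (socket) => {
  fetchNotificationCount(socket).then((count) => { // hypothetical DB lookup
    socket.emit("notifi_count", count);
  });
});

// Client (assumes the socket.io client script is loaded):
// one persistent connection, no setInterval needed
const socket = io("http://localhost:3000");
socket.on("notifi_count", (count) => {
  $('#notifi_count').html(count);
});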
I've read a bit about Server-Sent Events (SSE), and it seems to me that the biggest difference between SSE and Ajax polling is that with the latter you're supposed to query the server yourself after each response, while with SSE the browser does that for you. Is that correct?
And in terms of server handling, is there almost no difference between SSE and Ajax polling, apart from the minor difference of formatting the response in a certain way and including the Content-Type: text/event-stream header?
As Seabizkit basically said, one method polls the server (as much as it wants), and the other sends messages (when the server decides to send them).
If there were a single update of some data per day, can you see what the difference would be between all clients checking once per minute and the server sending the message once to everyone who has subscribed to the event?
In your question you ask if this is correct: 'the biggest difference between SSE and Ajax Polling is that in latter you're supposed to query server yourself after each response, while with SSE a browser does that for you'. To me this means you've basically asked if the browser is doing the requests for you.
Ajax Polling is asking for data - so you can check to see if it has changed etc. (similar to a web page request) on a timed basis.
An SSE sends a message to all that want to know of the change ONLY when the change has occurred.
Polling is not querying after each response; it is querying as much as you want, when you want (10 times per second if you wish, or 100, or 1,000, whatever you deem fit).
Events occur WHEN something has happened, and subscribers are then notified (hopefully just the once).
Imagine if I wanted to know if my parcel delivery driver will be turning up within the next 30 minutes.
I could call once a minute and ask - I could do this all day long if I wanted, or the driver can just call me and let me know they are 30 minutes away.
You stated in your comment to Seabizkit that the client side initiates communication. No, it doesn't. It adds an event handler for an event that is available on the server. The communication after that is the server sending a message to the client, be it 5 seconds later, 5 minutes later, or 50 times per second; the client doesn't request again, it has subscribed to the event and will be notified every time it fires.
Please bear in mind that this is a general explanation - not a technical one, because your question was fairly open in asking what the difference is between the two.
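As a rough illustration of "the browser does that for you", this is all the client side of SSE amounts to, assuming a hypothetical /events endpoint that responds with Content-Type: text/event-stream:

// Subscribe once; the browser keeps the connection open, parses the
// event stream, and reconnects automatically. No re-querying after
// each response.
const source = new EventSource('/events'); // hypothetical endpoint

source.onmessage = function(event) {
  // Fires only when the server actually sends a message
  console.log('update from server:', event.data);
};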
In the context of browsers...
The difference is: One Polls and the other responds to an Event(*).
Polling is started at the browser end:
Make a request... receive a response... do something (usually change the UI).
Polling is expensive (relative to what you are doing!).
Polling is far easier to set up compared to handling server-side changes in the browser.
Server-side events/changes are started at the server.
How to notify the browser?
Browsers out of the box have no way to respond to server-side changes;
basically, the browser has no idea that anything happened on the server.
You are left to handle this on your own.
Luckily, a library such as SignalR (http://signalr.net/)
can be used to simplify this a lot for you. But the complexity is still quite high compared to that of a simple page with polling.
It requires you to handle socket connections between "clients".
(*) = pinch of salt, technically not worded correctly.
If this doesn't answer your question or you want more info, ask.
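For a sense of what "started at the server" looks like in code, here is a hedged sketch using the @microsoft/signalr browser client; the hub URL and event name are hypothetical:

import * as signalR from "@microsoft/signalr";

const connection = new signalR.HubConnectionBuilder()
  .withUrl("/changesHub") // hypothetical hub exposed by the server
  .build();

// The browser only registers a handler; the server decides when to call it.
connection.on("somethingChanged", (payload) => {
  console.log("server pushed:", payload);
});

connection.start().catch(console.error);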
Concerning JavaScript & PHP WebSocket TCP packet clumping, with example below.
For some reason, when sending packets quickly on my VPS, or when accessing my localhost through a domain pointing at my IP address, multiple packets clump together. I am trying to stream, for this example, 20 (100-byte) packets per second. On the server's end, they ARE indeed being sent out at a steady rate, exactly every 50 ms, making 20 per second.
However, when they get to the client, the client only processes new messages about every 1/4th of a second, so new packets are only received at a rate of about 4 per second...
What is causing this clumping of packets? The problem does not occur when everything runs through localhost. What's weirder is that it streams smoothly on iPhone's iOS Mobile Safari, with no problem at all, BUT it doesn't work at all on PC Safari (because I haven't set this up to work correctly with the old Hixie-76 WebSocket format; I'm assuming Mobile Safari is already using the newer RFC 6455 format or a newer JavaScript engine). I have tried multiple hosting companies, with exactly the same results each time.
See the example below, hosted on InMotion's VPS:
http://www.hovel.me/script/serverControl.php
(Click [Connect] on the left, then [View Game] on the right).
The current packet count received jumps by about 5 every time, as each batch of 5 packets is received at once, every 1/4th of a second. However, I've seen examples that can send a constant, quick stream of packets.
What causes this clumping together / packets to wait for each other?
EDIT: This HAS to be something to do with Nagle's algorithm, which collects small packets and sends them together. I'll work on bypassing this in PHP.
Even with TCP_NODELAY set in PHP, the problem still stands. Why it works on an iPhone but not on a PC is still throwing me off...
EDIT: Setting TCPNoDelay and TcpAckFrequency to 1 in the registry fixes this, but I can't expect every user to do that. There must be a client-side, bread-and-butter JavaScript way.
How can I have functionality replicating node.js' "socket.setNoDelay(true)", without using node.js?
In the end, the client not recognizing that Nagle's algorithm had been disabled, along with its acknowledgement frequency still being set at around 200 ms, was causing the intermediate network to hold the following packets in a buffer. Manually sending an acknowledgement message to the server every single time the client receives a message causes the network to immediately "wake up" and continue processing the next packets, instead of holding them in a buffer.
For example:
conn = new WebSocket(url);
conn.onmessage = function(evt) {
    conn.send('ACKNOWLEDGE BYTES'); // send an application-level ACK to the server immediately
    dispatch('message', evt.data);  // then do the regular event procedures
};
This temporary solution works; however, it nearly doubles bandwidth usage, among other network problems. Until I can get the WebSocket on the client's end to correctly not 'stand by' for the server ACK, and the network to push messages through immediately, this gets the packets through quicker without the buffer-corking problem.
That's TCP. It is helping you by economizing on IP packets. Part of it is due to the Nagle algorithm but part of it may also be caused by the intermediate network.
We're building a latency-sensitive web application that uses websockets (or a Flash fallback) for sending messages to the server. While there is an excellent tool called Yahoo Boomerang for measuring bandwidth/latency in web apps, the latency value produced by Boomerang also includes the time needed to establish the HTTP connection, which is not what I need, since the websocket connection is already established and we actually only need to measure the ping time. Is there any way to solve this?
Second, Boomerang appears to fire only once when the page is loaded and doesn't seem to rerun the tests later even if commanded to. Is it possible to force it to run connection tests e.g. every 60 seconds?
Seems pretty trivial to me.
Send PING to the server. Time is t1.
Read PONG response. Time is t2 now.
ping time = t2 - t1
Repeat every once in a while (and optionally report to the stats server).
Obviously, your server would have to know to send PONG in response to a PING command.
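A minimal browser-side sketch of that loop, assuming your server echoes a literal 'PONG' string for every 'PING' (both message names, and the URL, are hypothetical):

const ws = new WebSocket('wss://example.com/socket'); // hypothetical endpoint

function measurePing() {
  const t1 = performance.now(); // timestamp just before sending PING
  ws.addEventListener('message', function onPong(evt) {
    if (evt.data === 'PONG') {
      ws.removeEventListener('message', onPong);
      console.log('ping time:', (performance.now() - t1).toFixed(1), 'ms');
    }
  });
  ws.send('PING');
}

// Measure once the connection opens, then every 60 seconds
ws.addEventListener('open', () => {
  measurePing();
  setInterval(measurePing, 60000);
});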