In our application we have an SSE connection with a lifetime of 5 minutes; after 5 minutes the server closes the connection and the client reconnects automatically.
But here is the problem: while the client is reconnecting, an event may occur on the backend, and it will not be delivered over the SSE connection because the connection is not established yet.
So there are windows of 1-2 seconds during which we may lose events.
How can we handle this case? What is your opinion?
As I see it, we only have one choice: after every SSE reconnect, make additional GET requests to the server to refresh the data.
This is exactly what the Last-Event-ID HTTP header in the SSE protocol is designed for.
On the server side you should look for that header when you get a new connection. If it is set, immediately stream the missed events to that client. You should also set the id field on each message you push out to some unique identifier.
On the client side, for your particular use case, you do not need to do anything: when the SSE client reconnects it sends that header automatically, using the id of the last event it received.
In chapter 5 of my book Data Push Apps with HTML5 SSE, I argue you should also include that same unique id, explicitly, in the JSON data packet you push out, and you should support Last-Event-ID being given as a POST/GET argument as well. This gives you the flexibility to work with the long-polling alternatives to SSE, and it also means it can work if the reconnect came from the client side rather than the server side. (The former would be for supporting older browsers, though that matters less and less as IE dies out; the latter would be needed if you implement your own keep-alive mechanism.)
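For illustration, here is a minimal Node.js sketch of the server side of this idea; the in-memory history buffer, the publish() helper and the event shape are assumptions for the example, not something the SSE spec prescribes:

const http = require('http');

const history = [];   // assumed in-memory buffer of { id, data }; a real app would bound or persist it
let clients = [];
let nextId = 1;

http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive'
  });

  // The browser sets this header automatically when EventSource reconnects.
  const lastId = Number(req.headers['last-event-id'] || 0);

  // Replay whatever the client missed while it was reconnecting, then keep streaming.
  history.filter(ev => ev.id > lastId)
         .forEach(ev => res.write(`id: ${ev.id}\ndata: ${JSON.stringify(ev.data)}\n\n`));

  clients.push(res);
  req.on('close', () => { clients = clients.filter(c => c !== res); });
}).listen(8080);

// Every new event gets a unique, increasing id and is both buffered and pushed live.
function publish(data) {
  const ev = { id: nextId++, data };
  history.push(ev);
  clients.forEach(res => res.write(`id: ${ev.id}\ndata: ${JSON.stringify(ev.data)}\n\n`));
}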
You can queue the events on the server and dequeue them when the client is active.
Regardless of the client's connection status, add every event to the queue.
When the client is connected, dequeue all the events from the queue and deliver them.
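A minimal sketch of that idea (Node.js, single client; the names and the SSE-style write are illustrative assumptions):

const queue = [];
let clientRes = null;   // the currently open streaming response, or null while the client reconnects

function publish(event) {
  queue.push(event);    // always enqueue, regardless of the client's connection status
  flush();
}

function flush() {
  if (!clientRes) return;            // client not connected yet: keep the events queued
  while (queue.length > 0) {
    const event = queue.shift();     // dequeue and deliver now that the client is active
    clientRes.write(`data: ${JSON.stringify(event)}\n\n`);
  }
}

// When a client (re)connects, remember its response object and drain the backlog.
function onClientConnected(res) {
  clientRes = res;
  flush();
}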
Instead of sending the message directly to clients the application sends it to the broker. Then the broker sends the message to all subscribers (which may include the original sender) and they send it to the clients.
Refer to https://www.tpeczek.com/2017/09/server-sent-events-or-websockets.html
The client emits newdataFromClient to the server.
The server starts processing the new data:
socket.on('newdataFromClient', async (newdataFromClient) => {
  // asynchronous processing; how long this takes depends on the work involved
  let result = await doSomething(newdataFromClient)
  socket.emit('response', result)
})
While the server is not done processing, the client sends more data.
The server will eventually emit 2 results and send them back to the client, one for each newdataFromClient it received.
Will the results be sent back in order, or is whichever one finishes faster the one that will be sent back first?
I'm running a basic Node server on a MacBook Pro. If the server starts getting multiple newdataFromClient messages one after another, will it handle each on a separate thread, and when it runs out of threads will it start stacking them up in order?
I assume that my server will crash anyway if it can't handle too many calls but that's a separate issue.
Here I'm only interested in the order of the server responses.
socket.io events will arrive in the order they were sent from the server. The underlying transport here is TCP which keeps packets in order.
Now, if the server's handling of each arriving message involves asynchronous operations before it sends a result, there is no guarantee about the order in which it will send the responses. In fact, the completion order on the server is likely to be unpredictable.
Will the results be sent back in order, or is whichever one finishes faster the one that will be sent back first?
Whichever one finishes first on the server and sends a message back is the one that will be received first in the client. These are independent messages that simply go on the transport en-route to the client whenever they are emitted at the server. And the TCP transport which underlies the webSocket protocol which underlies the socket.io engine will keep these packets in the order they were originally sent from the server.
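To make that concrete, here is a small sketch in which two messages arrive in order but are answered in completion order (the stand-in doSomething and its delays are invented for the example):

// Simulated processing: the "heavy" payload takes longer than the "light" one.
function doSomething(payload) {
  const delay = payload.heavy ? 300 : 50;
  return new Promise(resolve => setTimeout(() => resolve(payload), delay));
}

socket.on('newdataFromClient', async (payload) => {
  const result = await doSomething(payload);
  socket.emit('response', result);   // emitted when processing finishes, not when the message arrived
});

// If the client sends { heavy: true } and then { heavy: false },
// the { heavy: false } response is emitted first and therefore received first.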
Ack feature in socket.io
socket.io does have a means of getting a specific "ack" back from a specific message. If you look at the socket.emit() doc, you will see that one of the optional arguments is an ack callback. That can be used (in conjunction with the server sending an ack response) to get a specific response to this specific message. For the details on how to implement, see both the client and server-side doc for that "ack" feature.
Absent using that built-in feature, you would have to build your own messageID based system so you can match a given response coming back from the server to the original request (that's what the "ack" feature does internally) because socket.io is not natively a request/response protocol (it's a message protocol).
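A short sketch of that ack feature, reusing the event name and doSomething from the question (both sides shown):

// client: pass a callback as the last argument to emit()
socket.emit('newdataFromClient', payload, (result) => {
  // runs only when the server acknowledges *this particular* message,
  // so responses cannot get mixed up between requests
  console.log('result for this message:', result);
});

// server: the acknowledgement callback arrives as the last argument of the handler
socket.on('newdataFromClient', async (payload, ack) => {
  const result = await doSomething(payload);
  ack(result);   // answer the originating message directly
});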
I have an issue: I need to update information for the user as soon as possible, but I don't know the exact time when the change will happen.
I use a setInterval function that checks for differences between the current state and the state at the previous check. If there are any differences, I send an AJAX request and update the info. Is that bad? I can't (or don't know how to) listen for any events in this case.
And what about the interval time? All users (~300 at the same time) are on a local network (ping 15-20 ms). I have to refresh the information immediately. Should I use 50 ms or 500 ms?
If the question is not very clear, just ask and I'll try to phrase it differently.
Thanks in advance
Solution: Websocket
Websockets allow client applications to respond to messages initiated from the server (compare this with HTTP where the client needs to first ask the server for data via a request). A good solution would be to utilize a websocket library or framework. On the client you'll need to create a websocket connection with the server, and on the server you'll need to alert any open websockets whenever an update occurs.
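As a rough illustration of the client side (the endpoint URL, the message shape and renderUpdate() are placeholders, not part of any particular framework):

function connect() {
  const ws = new WebSocket('ws://example.local:8080/updates');   // hypothetical endpoint

  ws.addEventListener('message', (event) => {
    const update = JSON.parse(event.data);   // the server pushes this whenever something changes
    renderUpdate(update);                    // placeholder UI update function
  });

  // if the connection drops, reconnect after a short delay so updates keep flowing
  ws.addEventListener('close', () => setTimeout(connect, 1000));
}

connect();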
The issue with interval
It doesn't scale. Even if you set the interval to 4000 milliseconds, once you hit 1000 users you are going to be slamming your server with roughly 15,000 requests and responses a minute (60 s / 4 s = 15 per user). That burns bandwidth and processing time mostly to return nothing. Websockets only send data to the client when the event you care about actually occurs.
Backend: PHP
Frameworks
Ratchet
Ratchet SourceCode
phpwebsocket
PHP-Websockets-Server
Simply implement one of the above frameworks to expose a websocket endpoint; the client then registers with that endpoint, and the server will send data on whatever event you define.
How are WebSockets implemented?
What is the algorithm behind this new tech (in comparison to Long-Polling)?
How can they be better than Long-Polling in terms of performance?
I am asking these questions because we have sample code here of a Jetty WebSocket implementation (server side).
If we wait long enough, a timeout will occur, resulting in the following message on the client.
And that is definitely the problem I'm facing when using long-polling. It stops the process to prevent server overload, doesn't it?
How are WebSockets implemented?
webSockets are implemented as follows:
Client makes HTTP request to server with "upgrade" header on the request
If the server agrees to the upgrade, then client and server exchange some security credentials and the protocol on the existing TCP socket is switched from HTTP to webSocket (an example handshake is shown after this list).
There is now a lasting open TCP socket connecting client and server.
Either side can send data on this open socket at any time.
All data must be sent in a very specific webSocket packet format.
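That handshake looks like this on the wire: the client's upgrade request comes first, followed by the server's 101 response. The key/accept values here are the sample pair from RFC 6455.

GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=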
Because the socket is kept open as long as both sides agree, this gives the server a channel to "push" information to the client whenever there is something new to send. This is generally much more efficient than using client-driven Ajax calls where the client has to regularly poll for new information. And, if the client needs to send lots of messages to the server (perhaps something like a multi-player game), then using an already open socket to send a quick message to the server is also more efficient than an Ajax call.
Because of the way webSockets are initiated (starting with an HTTP request and then repurposing that socket), they are 100% compatible with existing web infrastructure and can even run on the same port as your existing web requests (e.g. port 80 or 443). This makes cross-origin security simpler and keeps anyone on either client or server side infrastructure from having to modify any infrastructure to support webSocket connections.
What is the algorithm behind this new tech (in comparison to Long-Polling)?
There's a very good summary of how the webSocket connection algorithm and webSocket data format works here in this article: Writing WebSocket Servers.
How can they be better than Long-Polling in terms of performance?
By its very nature, long-polling is a bit of a hack. It was invented because there was no better alternative for server-initiated data sent to the client. Here are the steps:
The client makes an http request for new data from the server.
If the server has some new data, it returns that data immediately and then the client makes another http request asking for more data. If the server doesn't have new data, then it just hangs onto the connection for a while without providing a response, leaving the request pending (the socket is open, the client is waiting for a response).
If, at any time while the request is still pending, the server gets some data, then it forms that data into a response and returns a response for the pending request.
If no data comes in for a while, then eventually the request will time out. At that point, the client will realize that no new data was returned and it will start a new request.
Rinse, lather, repeat. Each piece of data returned or each timeout of a pending request is then followed by another ajax request from the client.
So, while a webSocket uses one long-lived socket over which either client or server can send data to the other, the long-polling consists of the client asking the server "do you have any more data for me?" over and over and over, each with a new http request.
Long polling works when done right; it's just not as efficient in terms of server infrastructure, bandwidth usage, mobile battery life, etc.
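A minimal client-side sketch of that loop (the /poll endpoint and handleUpdate() are placeholders):

async function longPoll() {
  while (true) {
    try {
      // the server holds this request open until it has data or its own timeout fires
      const res = await fetch('/poll');
      if (res.ok) {
        handleUpdate(await res.json());   // new data arrived
      }
      // whether data came back or the request timed out, immediately ask again
    } catch (err) {
      // network error: back off briefly before retrying
      await new Promise(resolve => setTimeout(resolve, 1000));
    }
  }
}

longPoll();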
What I want is an explanation of this: isn't the fact that WebSockets keep an open connection between client and server much the same as the long-polling wait process? In other words, why don't WebSockets overload the server?
Maintaining an open webSocket connection between client and server is a very inexpensive thing for the server to do (it's just a TCP socket). An inactive, but open TCP socket takes no server CPU and only a very small amount of memory to keep track of the socket. Properly configured servers can hold hundreds of thousands of open sockets at a time.
On the other hand, a client doing long-polling, even one for which there is no new information to be sent, will have to regularly re-establish its connection. Each time it does so, there's a TCP socket teardown, a new TCP connection, and then an incoming HTTP request to handle.
Here are some useful references on the topic of scaling:
600k concurrent websocket connections on AWS using Node.js
Node.js w/1M concurrent connections!
HTML5 WebSocket: A Quantum Leap in Scalability for the Web
Do HTML WebSockets maintain an open connection for each client? Does this scale?
Very good explanation about web sockets, long polling and other approaches:
In what situations would AJAX long/short polling be preferred over HTML5 WebSockets?
Long poll - request → wait → response. Creates a connection to the server like AJAX does, but keeps the connection open for some time (not long though); while the connection is open the client can receive data from the server. The client has to reconnect periodically after the connection is closed, due to timeouts or data EOF. On the server side it is still treated like an HTTP request, same as AJAX, except the answer to the request will happen now or at some time in the future defined by the application logic. Supported in all major browsers.
WebSockets - client ↔ server. Creates a TCP connection to the server and keeps it open as long as needed. The server or the client can easily close it. The client goes through an HTTP-compatible handshake process; if it succeeds, then server and client can exchange data in both directions at any time. It is very efficient if the application requires frequent data exchange in both directions. WebSockets do have data framing that includes masking for each message sent from client to server, so the data is obfuscated (note that masking is not encryption). support chart (very good)
Overall, WebSockets have much better performance than long polling, and you should use them instead of long polling.
How does the Twitter page query/receive notifications and information about new tweets?
I'd like to implement a similar mechanism for my HTML+JS client -> web service setup.
I don't know what Twitter uses exactly, but there are a few techniques to handle server notifications.
You can use long-polling (your client repeatedly issues the same ajax request for new information, and the server holds each request open until it has something to return):
http://techoctave.com/c7/posts/60-simple-long-polling-example-with-javascript-and-jquery/
Or there is the "new" standard called WebSocket. A good starting point for writing a websocket client is this Mozilla tutorial.
There are multiple ways to implement real-time notifications:
HTTP Long Polling: The client initiates a request. The server checks whether it has any new notifications. Regardless of whether it has new notifications, an appropriate response is sent and the connection is closed. After time X the client initiates another request. (+ very easy to implement; - notifications are not real time, since data retrieval is client-initiated and depends on X; as X decreases, the overhead on the server increases.)
HTTP Streaming: Very similar to HTTP long polling, except the connection is not closed. The server sends a chunked response, so as soon as the server receives a new notification that it wants to push, it can simply write to the socket (see the sketch after this list). (+ lower latency than long polling and almost real-time behaviour; the overhead of closing and reopening connections is reduced; - memory usage on the client side keeps piling up, ugly hacks, etc.)
WebSocket: A TCP-based protocol that provides true two-way communication. The server can push data to the client at any time. (+ true real time; - some older browsers don't support it.) Read more about it at WebSocket.org | About WebSocket.
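As a rough sketch of the HTTP streaming option in Node.js (the /stream endpoint and the timer standing in for "a new notification arrives" are assumptions for the example):

const http = require('http');

http.createServer((req, res) => {
  if (req.url !== '/stream') {
    res.writeHead(404);
    return res.end();
  }

  // keep the connection open; Node uses chunked transfer automatically when no Content-Length is set
  res.writeHead(200, { 'Content-Type': 'text/plain' });

  const timer = setInterval(() => {
    // in a real application this would run whenever a notification actually arrives
    res.write(JSON.stringify({ ts: Date.now() }) + '\n');
  }, 1000);

  req.on('close', () => clearInterval(timer));
}).listen(8080);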
In my ASP.NET MVC app, the user clicks a button on the UI to make a phone call. An ajax request goes to the MVC app, which calls a phone dialer -- a method in a library that calls an external component to make the call.
If a dialed call is terminated by the recipient of the call, the phone dialer component raises an event by calling an event handler in its own class.
I want to propagate that event to the client side so that it may update its UI.
An Option I Can't Use
I've looked at JavaScript Server-sent events. However, they differ from my situation in the following way. With server-sent events, here's what happens:
1) The client initiates a connection on a new socket to the server. The key difference is that the client initiates the connection.
2) The connection is held live and active until the server or the client wants to terminate it.
3) The server has to stay alive the whole time, from the moment the connection is made until the client or the server wants to terminate the connection and no longer exchange notifications. This means that a new socket connection, and consequently a new worker thread to service the notification exchange, is used per client.
If I use server-sent events, I will have to make a server that stays alive. That means I will have to have a new action on a controller and a corresponding view that gets called at the very beginning and stays alive until the notification about the call hang-up is received.
Not only can this be expensive, it is also counter-intuitive to my design, as I do not want to be redirected to a new View just to listen for events.
Anyone have any other alternative?
You have to use either a WebSocket or long polling. Both require you to set up a connection from the client to the server, in addition to the normal HTTP cycle. And what else would you expect? Once the page has been sent, the communication between client and server is done; the HTTP cycle is over and no more data can flow. The new connection needs to originate from the client because the client does not accept arbitrary incoming connections.
I do not think there are other alternatives in the normal case.
SignalR, etc., all require the connection to be kept alive or periodically re-established by the client. I am not aware of anything that allows the server to initiate a connection with a browser (it does not even seem technically possible, due to proxies/firewalls, etc.).