Nowadays, real-time updates are common on most popular sites with heavy usage.
StackExchange
Facebook
Twitter
I'm wondering how these "real time updates" work? I'm just looking for a general bird's-eye view. I suspect that the JS can't be calling the server every X seconds for an update and then appending that to the <ul>. Does the server send a notification when there is more content to pull?
It would be great if there were a simple how-to article that explains this with a demo.
Stack Overflow uses WebSockets for real-time updates. If you take a look at the source code (2012 source code), you will see:
StackExchange.ready(function () {
    StackExchange.realtime.init('ws://sockets.ny.stackexchange.com');
    StackExchange.realtime.subscribeToInboxNotifications();
    StackExchange.realtime.subscribeToReputationNotifications('1');
});
Note, however, that older Opera versions do not support WebSocket (support did not arrive until Opera 10.70).
Facebook, however, does not seem to be using WebSockets; I think they are just using plain XHR with a technique called long polling, in which the server holds on to the connection until there is new information and then responds to the request. If you open up the developer tools you can see that there is always one request with a status of pending.
It is indeed sending a request every ~60 seconds.
It seems that Twitter also uses simple XHR (1 minute intervals) for their "real time updates".
Facebook uses long polling/Comet: it makes a connection and waits for a response; if there is no response, the request times out and it tries again. The timeout is around 40 seconds. That's how it does most of its instant updating, although they use a combination of techniques. More on long polling here:
http://en.wikipedia.org/wiki/Comet_(programming)
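For illustration, a minimal long-polling loop on the client might look like the sketch below. The /poll endpoint and handleUpdate() function are made up for the example; this is the general shape of the technique, not Facebook's actual code.

function longPoll() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/poll');                    // hypothetical endpoint that holds the request open
    xhr.onload = function () {
        if (xhr.status === 200) {
            handleUpdate(xhr.responseText);      // hypothetical handler for the new data
        }
        longPoll();                              // immediately reconnect and wait again
    };
    xhr.onerror = xhr.ontimeout = function () {
        setTimeout(longPoll, 1000);              // back off briefly on errors, then retry
    };
    xhr.send();
}

longPoll();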
Related
I use setInterval to fetch my notification counter every 5 seconds, but I think it's a bad idea to get the results this way, because if you stay on my site for a while you end up with a huge number of XHR loads. If you use Facebook, you don't see a lot of XHR. Here is a capture of my site's XHR requests:
My code, in notification.php:
function getnotificount() {
    $.post('getnotificount.php', function(data) {
        $('#notifi_count').html(data);
    });
}

setInterval(function() {
    getnotificount();
}, 5000);
Your code is OK. It is not 'loading more than 1 billion XHR requests'; it's starting (and finishing, as we can see) a request every X seconds, and there's nothing wrong with that.
However, it's not the best way to implement a push notification system. That would be WebSockets, which give your client a way to 'listen' for messages from your server. There are frameworks for this; the most popular one (and the one that I recommend) is socket.io.
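As a rough sketch of what that could look like with socket.io (the notification_count event name, getNotificationCount() lookup, and port are made up for the example):

// Server (Node.js)
const { Server } = require('socket.io');
const io = new Server(3000);

io.on('connection', function (socket) {
    // send the current count once when a client connects
    socket.emit('notification_count', getNotificationCount());  // hypothetical lookup
});

// call this whenever the count actually changes, instead of waiting to be polled
function pushCount(newCount) {
    io.emit('notification_count', newCount);   // broadcast to every connected client
}

// Client (with the socket.io client script loaded)
var socket = io('http://localhost:3000');
socket.on('notification_count', function (count) {
    $('#notifi_count').html(count);
});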
A third, more advanced/modern solution would be implementing a service-worker-based notification system, but that is probably more complexity than this use case calls for.
What you are doing is polling: making a new request to the server on a regular basis. For an HTTP request, a TCP connection is opened, a request and response are exchanged, and the TCP connection is closed. This is what happens every 5 seconds in your case.
If you want a lighter-weight solution, have a look at WebSockets, for example via socket.io. Only one TCP connection is opened and maintained between the front end and the back end. This bidirectional connection lets the back end notify the front end when something happens.
It is not a bad idea at all. It is called polling, and it is used in many places as a means of getting data from the server regularly. However, it is neither the best nor the most modern solution. In fact, if your server supports WebSockets, then you should use them. You do not need Node.js to use WebSockets, since WebSocket is a protocol for creating a duplex communication channel between your server and the client. You should read about WebSockets. You can also use push notifications, which are inferior to WebSockets in my opinion, and a hacky way is to use a forever frame.
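For completeness, the browser side of a plain WebSocket connection (no framework needed on the client) looks roughly like this; the wss://example.com/notifications URL and message shape are placeholders for whatever your server exposes:

var ws = new WebSocket('wss://example.com/notifications');   // placeholder URL

ws.onopen = function () {
    // optionally tell the server what we want to hear about
    ws.send(JSON.stringify({ subscribe: 'notifications' }));
};

ws.onmessage = function (event) {
    var msg = JSON.parse(event.data);
    $('#notifi_count').html(msg.count);   // update the counter without polling
};

ws.onclose = function () {
    // the duplex channel dropped; reconnect logic would go here
};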
I am creating a Chrome extension that checks the Gmail inbox.
I'm using the XML feed URL to fetch it:
https://mail.google.com/mail/u/0/feed/atom
For updating, I'm using the chrome.alarms API to send a GET request every 10 seconds. Is that too much? If not, could I change it to 1 second? How much load do their servers have to handle in order to send me the feed's information?
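For reference, the setup being described would look roughly like the sketch below, assuming a Manifest V2 background page; parseUnreadCount() is a placeholder for your own feed parsing, and note that Chrome may clamp very short alarm periods (historically to about one minute), so a 10-second alarm may not actually be honored.

// background script sketch
chrome.alarms.create('checkGmail', { periodInMinutes: 1 });   // short periods may be clamped by Chrome

chrome.alarms.onAlarm.addListener(function (alarm) {
    if (alarm.name !== 'checkGmail') return;
    fetch('https://mail.google.com/mail/u/0/feed/atom')
        .then(function (response) { return response.text(); })
        .then(function (xml) {
            var count = parseUnreadCount(xml);                 // e.g. read the <fullcount> element
            chrome.browserAction.setBadgeText({ text: String(count) });
        });
});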
Using XHR every 10 seconds doesn't look like a bad idea, but every second might be too much. Each XHR request creates a new connection, and a lot of data is sent to and received from the server. If you need a real-time app, please consider using WebSocket or Socket.IO instead. They are very lightweight, fast and easy to use.
Notification APIs (gmail seems to have them)
You can get updates with a lot less latency if you use a different technique such as "long polling" or "web sockets". This is very useful for things where you want no lag, like a real-time chat, a web-based video game, or a time-sensitive process like an auction or ticket/order queue. It might be a bit less important for something less real-time, like email (see the last paragraph in this answer).
Gmail seems to have an API that is explicitly designed for fast notifications.
HTTP polling
Most web servers can stand up to being called once a second without killing their site. If they can't, then they have some serious security problems (Denial of Service attacks, or worse if their site is buggy enough to suffer data loss when overloaded).
Google has big servers, and protection, so you don't really need to worry about what they can handle, as long as they don't block you. Google may rate limit calls to their Gmail API, and you may end up getting a user throttled if you call their API more than they prefer. Consult their documentation to find out what their rate limiting policies are.
To answer your question more generally, normal HTTP isn't really optimized for frequent polling for refreshed data. You can make a decent number of requests (even upwards of one a second or more). You probably won't kill the computer or browser or even make them run slowly, as long as the request and response payload data is small enough, and the processing/DOM changes you do with the response are minimal when the data comes back unchanged.
Assuming you don't violate per-site rate limits, and have small data payloads in the request you are making, then the biggest problem is that you might still be wasting a lot of bandwidth. For people who have to pay by the minute/megabyte, this is a problem. This is much more frequent in Europe than in the United States (although it is also frequent on cellular devices).
Consider if you really need email to be checked every second. Would every ten seconds be fine? Maybe every minute? Maybe your app could do a refresh when you send mail, but take longer to check for new mail when it is idle? Think about what use cases you are solving for before assuming everything always has to be constantly updated. What would waiting a few extra seconds break? If the answer is nothing, then you can safely slow down your updates. If there are use cases that would be broken or annoying, then figure out why it is important. See if there are even better ways to address that use case, or if updating frequently is actually the right answer. Sometimes it is!
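A sketch of that "slow down when idle" idea, with made-up interval values and a hypothetical fetchInbox() helper:

var delay = 10000;                         // start by checking every 10 seconds
var MAX_DELAY = 5 * 60 * 1000;             // never wait more than 5 minutes

function checkMail() {
    fetchInbox(function (hasNewMail) {     // hypothetical fetch + callback
        if (hasNewMail) {
            delay = 10000;                 // activity: go back to fast polling
        } else {
            delay = Math.min(delay * 2, MAX_DELAY);   // idle: back off gradually
        }
        setTimeout(checkMail, delay);
    });
}

checkMail();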
I've read a bit about Server-Sent Events (SSE), and it seems to me that the biggest difference between SSE and Ajax polling is that with the latter you're supposed to query the server yourself after each response, while with SSE the browser does that for you. Is that correct?
And in terms of server handling, is there almost no difference between SSE and Ajax polling, apart from the minor difference of formatting the response in a certain way and including the Content-Type: text/event-stream header?
As Seabizkit basically said, one method polls the server (as much as it wants), and the other sends messages (when the server decides to send them).
If there was a single update of some data per day, can you see what the difference would be if all clients were checking once per minute, or the server sending the message once to all who have subscribed to the event?
In your question you ask if this is correct: 'the biggest difference between SSE and Ajax Polling is that in latter you're supposed to query server yourself after each response, while with SSE a browser does that for you'. To me this means you've basically asked if the browser is doing the requests for you.
Ajax Polling is asking for data - so you can check to see if it has changed etc. (similar to a web page request) on a timed basis.
An SSE sends a message to all that want to know of the change ONLY when the change has occurred.
Polling is not querying after each response, it is querying as much as you want, when you want (10 times per second if you wish, a 100, a 1,000, whatever you deem fit).
Events occur WHEN something has happened, and subscribers are then notified (hopefully just the once).
Imagine if I wanted to know if my parcel delivery driver will be turning up within the next 30 minutes.
I could call once a minute and ask - I could do this all day long if I wanted, or the driver can just call me and let me know they are 30 minutes away.
You stated in your comment to Seabizkit that the client side initiates the communication. No, it doesn't. It adds an event handler for an event stream that is available on the server. The communication after that is the server sending messages to the client, be it 5 seconds later, 5 minutes later, or 50 times per second; the client doesn't request again, it has subscribed to the event and will be notified every time it fires.
Please bear in mind that this is a general explanation - not a technical one, because your question was fairly open in asking what the difference is between the two.
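To make it concrete anyway, here is a minimal sketch of both sides of an SSE stream; the /events endpoint, port, and Node.js server are just illustrative choices, and the setInterval stands in for whatever real "something changed" trigger you have:

// Browser: subscribe once; the browser keeps the connection open and reconnects for you
var source = new EventSource('/events');
source.onmessage = function (event) {
    console.log('server says:', event.data);
};

// Server (Node.js): hold the response open and write events only when something happens
var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive'
    });
    setInterval(function () {
        res.write('data: ' + new Date().toISOString() + '\n\n');   // one SSE message per change
    }, 5000);
}).listen(8000);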
In the context of browsers...
The difference is: one polls and the other responds to an event(*).
Polling is started at the browser end.
Make a request... receive a response... do something (usually change the UI).
Polling is expensive (relative to what you are doing!).
Polling is far easier to set up compared to handling server changes in the browser.
Server-side events/changes are started at the server.
How do you notify the browser?
Browsers out of the box have no way to respond to server-side changes;
basically the browser has no idea that anything happened on the server.
You are left to handle this on your own.
Luckily, a library such as SignalR (http://signalr.net/) can be used to simplify this a lot for you, but the complexity is still quite high compared to that of a simple page with polling.
It requires you to handle socket connections between "clients".
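As a rough idea of what the browser side looks like with the current SignalR JavaScript client (the hub URL, the 'notify' method name, and updateUi() are made up; the server-side hub is not shown):

// assumes the @microsoft/signalr browser bundle has been loaded
var connection = new signalR.HubConnectionBuilder()
    .withUrl('/notificationHub')                  // hypothetical hub endpoint
    .build();

connection.on('notify', function (message) {     // fired whenever the server pushes 'notify'
    updateUi(message);                            // hypothetical UI update
});

connection.start();                               // open the connection and start listening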
(*) = pinch of salt, technically not worded correctly.
If this doesn't answer your question or you want more info, just ask.
I'm already tossing around a solution but as I haven't done something like this before I wanted to check what SO thought before implementation.
Basically I need to modify an existing web-based application that has approximately 20 users to add push notifications. It is important that the users get the notifications at the same time (PC-A shouldn't get an alert 20 seconds before PC-B). Currently the system works off AJAX requests, sending a request to the server every 20 seconds, requesting any updates, and completely rebuilding the table of data each time (even if the data hasn't changed). This seems really sloppy, so there are two methods I've come up with.
Don't break the connection from server to client. This idea involves keeping the connection between server and client active the entire time. Bandwidth isn't really an issue with any solution, as this is on an internal network with only approximately 20 people. With this solution the server could push JavaScript to the client whenever there's an update and modify the table of data accordingly. Again, it's very important that every connected PC receives the updates as close to the same time as possible. The main drawback is my lack of experience: I've never done it before, so I'm not sure how well it'd work or whether it's just generally a bad idea.
Continue with the AJAX requests, but only respond at intervals. A second solution I've thought of would be to allow the clients to make AJAX requests as usual (currently every 20 seconds) but have the server only respond at 30-second intervals (e.g. 2:00:00 and 2:00:30, regardless of how many AJAX requests it receives in that span of time). This would require adjusting the timeout for the AJAX request to prevent the request timing out, but it sounds okay in theory, at least to me.
This is for an internal network only, so bandwidth isn't the primary concern, more so that the notification is received as close to each other as possible. I'm open to other ideas, those are just the two that I have thought of so far.
Edit
Primarily looking for pros and cons of each approach. DashK has another interesting approach but I'm wondering if anyone has experience with any of these methods and can attest to the strengths and weaknesses of each approach, or possibly another method.
If I understand your needs correctly, I think you should take a look at Comet.
Comet is a web application model in which a long-held HTTP request allows a web server to push data to a browser, without the browser explicitly requesting it. Comet is an umbrella term, encompassing multiple techniques for achieving this interaction. All these methods rely on features included by default in browsers, such as JavaScript, rather than on non-default plugins.
The Comet approach differs from the original model of the web, in which a browser requests a complete web page at a time.
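A bare-bones long-polling (Comet) server in Node.js might look like the sketch below, just to show the shape of the idea: requests to /updates are parked until publish() is called, so all ~20 clients hear about a change at nearly the same moment. The endpoint name, port, and publish() trigger are invented for the example.

var http = require('http');
var waitingClients = [];                        // responses we are deliberately not answering yet

http.createServer(function (req, res) {
    if (req.url === '/updates') {
        waitingClients.push(res);               // park the request until there is news
        setTimeout(function () {                // avoid proxies/browsers timing us out
            var i = waitingClients.indexOf(res);
            if (i !== -1) {
                waitingClients.splice(i, 1);
                res.end(JSON.stringify({ timeout: true }));   // client will simply reconnect
            }
        }, 30000);
    }
}).listen(8000);

// call this whenever the data actually changes
function publish(update) {
    waitingClients.forEach(function (res) {
        res.end(JSON.stringify(update));        // every parked client gets the update at once
    });
    waitingClients = [];
}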
How about using an XMPP server to solve the problem?
Originally designed to be an Instant Messaging platform, XMPP is a messaging protocol that enables users in the system to exchange messages. (There's more to this - But let's keep it simple.)
Let's simplify the scenario a little bit. Imagine the following:
You're a system admin. When the system has a problem, you need to let all the employees, about 20 of them, know that the system is down.
In the old days, every employee would ask you, "Is the system up?" every hour or so, and you'd respond passively. While this works, you are overloaded - not by fixing system outages, but by 20 people asking for system status every hour.
Now, AIM is invented! Since every employee has access to AIM, you think, "Hey, how about having every single one of them join a 'System Status' chat room, and I'll just send a message to the room when the system is down (or is back)?" By doing so, employees who are interested in knowing the system status will simply join the 'System Status' room, and will be notified of system status updates.
Back to the problem we're trying to solve...
System admin = "System" who wants to notify the web app users.
Employees = Web app users who want to receive notifications.
System Status chat room = still the System Status chat room
When a web app user signs on to your web app, have the page automatically log them onto the XMPP server and join the system status chat room.
When the system wants to notify the users, write code to log on to the XMPP server, join the chat room, and broadcast a message to the room.
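For the browser side, a sketch using Strophe.js (one possible JavaScript XMPP library; see the checklist below) might look roughly like this. The BOSH URL, JIDs, room name, and showStatus() helper are all placeholders:

var connection = new Strophe.Connection('https://xmpp.example.com/http-bind');   // BOSH endpoint

connection.connect('alice@example.com', 'secret', function (status) {
    if (status === Strophe.Status.CONNECTED) {
        // listen for group chat messages
        connection.addHandler(onGroupMessage, null, 'message', 'groupchat');
        // join the "System Status" room by sending presence to room-jid/nickname
        connection.send($pres({ to: 'system-status@conference.example.com/alice' }));
    }
});

function onGroupMessage(msg) {
    var body = msg.getElementsByTagName('body')[0];
    if (body) {
        showStatus(Strophe.getText(body));   // placeholder UI update
    }
    return true;                             // keep this handler installed
}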
By using XMPP, you don't have to worry about:
Setting up "Lasting connection" - Some open source XMPP server, eJabberd/OpenFire, has built-in support for BOSH, XMPP's implementation of the Comet model.
How the message is delivered
You however will need the following:
Find a JavaScript library that can help you log on to an XMPP server. (Just Google; there are a lot.)
Find an XMPP library for the server-side code. (XMPP libraries exist for both Java & C#, but I'm not sure what system you're using behind the scenes.)
Manually provision each user on the XMPP server (Seems like you only have 20 people. That should be easy - However, if the group grows bigger, you may want to perform auto-provisioning - Which is achievable through client-side Javascript XMPP library.)
As far as long-lasting AJAX calls, this implementation is limited by the at-most-2-connection-to-the-same-domain issue. If you used up one connection for this XMPP call, you only have 1 more connection to perform other AJAX calls in the web-app. Depending on how complex your webapp is, this may or may not be desirable, since if 2 AJAX calls have already been made, any subsequent AJAX call will have to wait until one of the AJAX pipeline freed up, which may cause "slowness" on your app.
You can fix this by converting all AJAX calls into XMPP messages and having a bot-like user on the server listen to those messages and respond to them by, say, sending back HTML snippets/JSON objects with the data. This however might be too much for what you're trying to achieve.
Ahh. Hope this makes sense... or not. :p
See http://ajaxpatterns.org/HTTP_Streaming
It allows you to push data from the server when the server wants to, not just in response to a query.
You could use this technique without making large changes to the current application, and synchronize output by the time on the server.
In addition to the other two great options above, you could look at Web Workers if you know they have latest Chrome, Safari, FF, or Opera for a browser.
A Worker has the added benefit of not operating in the same thread as the rest of the page, so performance will be better. The downside is that, for security purposes, you can only send string data between the two scripts and the worker does not have window or document context. However, JSON can be represented as a string, so there's really no limit to the data.
Workers can receive data multiple times and asynchronously. You set the onmessage handler to act each time it receives something.
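A small sketch of that split, with the polling moved into a worker (the poller.js filename, /updates endpoint, and refreshDashboard() function are made-up names):

// main page
var worker = new Worker('poller.js');
worker.onmessage = function (event) {
    var update = JSON.parse(event.data);       // the worker sends strings, so parse back to an object
    refreshDashboard(update);                  // hypothetical UI update
};

// poller.js: runs off the main thread, so the page stays responsive
setInterval(function () {
    fetch('/updates')
        .then(function (response) { return response.text(); })
        .then(function (json) { postMessage(json); });   // hand the string back to the page
}, 20000);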
If you can ask every user to use a specific browser (Latest Safari or Chrome), you can try WebSockets too.
I'm just wondering if there is a way to have a server push information to a JavaScript function. Essentially I have a dashboard-type page that has a JavaScript function to get updates from the server and update the dashboard.
I would like my server to be able to "ping" the JS.
I don't even know how that could be possible (I'm guessing Twitter and Facebook use polling?), but I'd thought I ask.
I've heard of Comet, but I don't know if that works with a plain, standard IIS 7 installation (it's a SharePoint 2010 site, if that matters in any way). If I understand it correctly, Comet is essentially a constantly open connection, so it seems like it's actually the opposite of what I want (reducing the number of requests and therefore load).
If you're looking for a comet server for IIS, check out WebSync; it's exactly that :)
Truly initiating a connection from the server is not possible using HTTP. Comet isn't really a single technique, but a set of different workarounds (many of which are described at the article you linked).
For information on Comet techniques with IIS, see the prior question, Comet Programming in IIS. One of the programs discussed there is WebSync.
A Comet style workaround is the most common way to get this functionality. The connection is not constantly open, but rather throttled to make calls every x seconds, then try again upon timeout. The timeout essentially means that the server didn't have anything to give to the client in the duration of the poll. You'll see that the Etherpad code used this same approach, which has been integrated into other Google products now like Google Docs and Wave.
As Samuel Neff says, "You're going to need an open connect to "push" data from the server to the client."
You can use a service like PubNub to open persistent connections from the client and support fallbacks for older browsers.
I made a small demo to show you how the front-end of this application may work. The demo shows PubNub latency over time. The source is available here.
The browser subscribes to a channel and fires a callback when a message is received.
pubnub.subscribe({
    channel: 'my_channel',
    message: function(m) { console.log(m); }
});
In the demo the client also publishes messages. In your case you would include the PubNub IIS library.
pubnub.Subscribe<string>(channel="mychannel", DisplaySubscribeReturnMessage, DisplaySubscribeConnectStatusMessage, DisplayErrorMessage);
// NOTE: DisplaySubscribeReturnMessage, DisplaySubscribeConnectStatusMessage and DisplayErrorMessage are callback methods
You're going to need an open connection to "push" data from the server to the client. So even if you went the route of using a plugin like Flash to open a socket connection which supports two-way communication, you still have an open socket connection.
Your statement "reducing # of requests and therefore load" really is problematic. You're equating number of requests with load and that is not accurate. With Comet the majority of requests are waiting on data. Therefore you can have a very high number of requests, but really a very low load on the server--it's hardly using resources besides a waiting thread from the worker thread pool.
Use Comet. Works great, is simple to implement, and does exactly what you need.
You have to do it the other way around, by having the client "ping" the server with JS.
You can do something like:
function pollServer()
{
    // Get some parameter
    var param = .......
    AJAXCall("page.php?param=" + param, onReturn);
}

function onReturn(response)
{
    // do something with the response
    setTimeout(pollServer, 5000);
}

pollServer();
AJAXCall being the function you use to do an AJAX call and that calls onReturn when it gets a response.
Once it gets a response, it waits (in this case 5 seconds) and then polls the server again.