I have a simple long poll request on my website to check if there are any new notifications available for a user. As far as the request goes, everything seems to work flawlessly; the request is only fulfilled once the database is updated (and a notification has been created for that specific user), and a new request is sent out straight after.
The Problem
What I have noticed is that while the request is waiting for a response from the database (as long polls should), all other requests to the server hang with it, whether they are media files, AJAX requests or even new pages loading. Every request to the server hangs until I close my browser and reopen it.
What is even stranger is that if I visit another one of my localhost sites (my long poll is on a MAMP virtual-host site, www.example.com), there is no problem and I can still use them as if nothing has happened, despite the fact that they're technically on the same server.
My Code
This is what I have on my client side (longpoll.js):
window._Notification = {
    listen: function(){
        /* this is not jQuery's Ajax; if you're going to comment any suggestions,
         * please ensure you comment based on regular XMLHttpRequests and avoid
         * any suggestions that use jQuery */
        xhr({
            url: "check_notifs.php",
            dataType: "json",
            success: function(res){
                /* this will log the correct response as soon as the server is
                 * updated */
                console.log(res);
                _Notification.listen();
            }
        });
    },
    init: function(){
        this.listen();
    }
};
/* after page load */
_Notification.init();
And this is what I have on my server side (check_notifs.php):
header("Content-type: application/json;charset=utf-8", false);
if(/* user is logged in */){
$_CHECKED = $user->get("last_checked");
/* update the last time they checked their notifications */
$_TIMESTAMP = time();
$user->update("last_checked", $_TIMESTAMP);
/* give the server a temporary break */
sleep(1);
/* here I am endlessly looping until the conditions are met, sleeping every
* iteration to reduce server stress */
$_PDO = new PDO('...', '...', '...');
while(true){
$query = $_PDO->prepare("SELECT COUNT(*) as total FROM table WHERE timestamp > :unix");
if($query->execute([":unix" => $_CHECKED])){
if($query->rowCount()){
/* check if the database has updated and if it has, break out of
* the while loop */
$total = $query->fetchAll(PDO::FETCH_OBJ)[0]->total;
if($total > 0){
echo json_encode(["total" => $total]);
break;
}
/* if the database hasn't updated, sleep the script for one second,
* then check if it has updated again */
sleep(1);
continue;
}
}
}
}
/* for good measure */
exit;
I have read about NodeJS and various other frameworks that are suggested for long-polling, but unfortunately they're currently out of reach for me and I'm forced to use PHP. I have also had a look around to see if anything in the Apache configuration could solve my problem, but I only came across "How do you increase the max number of concurrent connections in Apache?", and what's mentioned there doesn't seem like it would be the problem, considering I can still use my other localhost website on the same server.
I'm really confused as to how I can solve this issue, so all help is appreciated. Cheers.
What is actually happening is that PHP is waiting for this script to end (it holds a lock) before serving the next requests from the same client.
As you can read here:
there is some lock somewhere -- which can happen, for instance, if the two requests come from the same client, and you are using file-based sessions in PHP: while a script is being executed, the session is "locked", which means the server/client will have to wait until the first request is finished (and the file unlocked) to be able to use the file to open the session for the second user.
the requests come from the same client AND the same browser; most browsers will queue the requests in this case, even when there is nothing server-side producing this behaviour.
there are more than MaxClients currently active processes -- see the quote from Apache's manual just before.
There's actually some kind of lock somewhere, and you need to check which lock it is. Maybe $_PDO is holding the lock, and you must close the connection before the sleep(1) to keep it released until you make the next request.
You can try to raise your MaxClients and/or apply this answer
Perform session_write_close() (or the corresponding function in CakePHP) to close the session at the beginning of the AJAX endpoint.
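A minimal sketch of that fix in plain PHP, assuming the same check_notifs.php endpoint from the question (the session key is a placeholder):
<?php
session_start();
/* read anything you need from the session first */
$userId = isset($_SESSION["user_id"]) ? $_SESSION["user_id"] : null;
/* release the session lock so other requests from the same browser
 * (pages, AJAX, media) are no longer blocked while this script
 * long-polls */
session_write_close();
header("Content-type: application/json;charset=utf-8", false);
/* ...the long-polling loop from the question goes here; note that
 * $_SESSION is effectively read-only from this point on */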
Related
A web client should only expose some features when a backend API is up and running. Therefore, I'm looking for a clean way to monitor the availability of this backend.
As a quick fix, I made a timer-based function that performs a basic GET on the API root. It's not very clean, generates lots of traffic and pollutes the javascript console with errors (in case of server down).
How should one deal with such a situation?
You can trigger something along the lines of this when you need it:
function checkServerStatus()
{
    setServerStatus("unknown");
    var img = document.body.appendChild(document.createElement("img"));
    img.onload = function()
    {
        setServerStatus("online");
    };
    img.onerror = function()
    {
        setServerStatus("offline");
    };
    img.src = "http://myserver.com/ping.gif";
}
Make ping.gif small (1 pixel) to make it as fast as possible.
Of course you can do it more smoothly by hitting an API endpoint that returns true and keeps the response time really small, but that requires some coding in the back-end; this simply needs you to place a 1-pixel GIF image in the correct directory on the server. You can use any picture already present on the server, but expect more traffic and time as the image grows larger.
Now put this in some function that calls it with a delay, or simply call it whenever you need to check the status; it's up to you.
If you need the server to send your app a notification when it goes down, then you need to implement push technology:
https://en.wikipedia.org/wiki/Push_technology
Ideally, you would have a reliable third server with a fast response rate pinging the desired server at some interval to determine whether it is up, and then using push to get that information to your app. That way the third server only sends a push when the status of your app server has changed. Ideally, this server's requests have high priority in your app server's queue, and the two servers are well connected and close to each other, but not on the same network in case that network fails.
Recommendation:
The first approach should do you good, since it's simple to implement and requires the least amount of knowledge.
Consider the second if:
You need a really small checking interval, which makes your application slower and network traffic higher
You have multiple applications that need the same check, making the load heavier on each application, the network AND the server. The second approach lets you use a single ping to determine the truth for all apps.
In order to limit the number of requests, a simple solution can be the use of server-sent events (SSE). This protocol, used on top of HTTP, allows the server to push multiple updates in response to the same client request.
Client-side code (JavaScript):
var evtSource = new EventSource("backend.php");
evtSource.onmessage = function(e) {
    console.log('status: ' + e.data);
}
evtSource.onerror = function(e) {
    // add some retry, then display an error to the user
}
Back-end code (PHP; SSE is also supported by other languages):
header("Content-Type: text/event-stream");
while (1) {
    // every 30 s, send an OK status in the SSE "data:" format
    echo "data: OK\n\n";
    ob_flush();
    flush();
    sleep(30);
}
In both cases this will limit the number of requests (only one per "session"), but you will have one socket open per client, which can also be too heavy for your server.
If you really want to lower the workload, you should delegate it to an external monitoring platform which can expose an API to publish the backend status.
Maybe one already exists if your backend is hosted on a cloud platform.
On my website, I have built a chatroom with support for multiple rooms. When a user joins the room, a session is placed into the database so that if they try to join the room again in another browser window, they are locked out.
It works like this:
1. Join the chatroom page
2. Connect to chatroom #main
If the user has a session in the database for #main
--- Block user from joining
else
--- Load chatroom
When the chatroom is closed client-side or the user terminates their connection with the /quit command, all of their sessions are deleted, and this works fine.
However
There is a possibility that users will just close the browser window rather than terminating their connection. The problem with this is that their session will stay in the database, meaning when they try to connect to the room, they are blocked.
I'm using this code in onbeforeunload to try to prevent that:
function disconnect() {
    $.ajax({
        url: "/remove-chat-sessions.php?global",
        async: false
    });
}
This is also the function called when the user types the /quit command
The problem
The problem with this is that when I reload the page, 5 times out of 10 the sessions have not been taken out of the database, as if the ajax request failed or the page reloaded before it could finish. This means that when I go back into the chatroom, the database still thinks that I am connected, and blocks me from entering the chatroom
Is there a better way to make sure that this AJAX call will load and if not, is there a better alternative than storing user sessions in an online database?
Edit:
The reason users are blocked from joining rooms more than once is because messages you post do not appear to you when the chatroom updates for new messages. They are appended to the chatroom box when you post them. This means that if users could be in the same chatroom over multiple windows, they would not be able to see the comments that they posted across all of the windows.
In this situation you could add some sort of polling. Basically, with JavaScript you request a page every X seconds. That page adds the user's session to the database. Then there's a script executing every Y seconds, where Y > X, that cleans old sessions.
The cleanup script that runs every Y seconds
...
// DB call (do as you like)
$All = fetch_all_recent();
foreach ($All as $Session)
{
    if ($Session['time'] < time() - $y)
    {
        delete_session($Session['id']);
    }
}
The script that JavaScript calls every X seconds
...
delete_old_session($User->id);
add_user_session($User->id, $Chat->id, time());
The main disadvantage of this method is the increase in the number of requests, something Apache does not handle well in large numbers. There are two non-exclusive alternatives, both of which require access to the server:
Use the nginx server. I have no experience with it, but I've read it supports many more concurrent connections than Apache.
Use some modern form of persistent connection, like socket.io. However, it uses node.js, which can be good or bad, depending on your business.
I want to long poll a script on my server from within a phonegap app, to check for things like service messages, offers etc.
I'm using this technique in the js:
(function poll(){
    $.ajax({
        url: "/php/notify.php",
        success: function(results){
            //do stuff here
        },
        dataType: 'json',
        complete: poll,
        timeout: 30000
    });
})();
which will start a new poll every 5 minutes (I'll stop the polling when the app is 'paused' to avoid extra load)
I am not sure how to set up the PHP side, though. I can set it up so it doesn't return anything and just loops through the script, but how do I make it return a response as soon as I decide I want to send a message to the app? My PHP code so far is:
<?php
include 'message.php';
$counter = 1;
while($counter > 0){
    //if the $message variable (from the included file) is non-empty, send it back to the app
    if($message != ''){
        // Break out of the while loop if we have data
        break;
    }
}
//if we get here we've broken out of the while loop, so we have a message, but make sure
if($message != ''){
    // Send data back
    print(json_encode($message));
}
?>
message.php contains a $message variable (array), which is normally blank but would contain data when I want it to. The problem is, when I update the $message var in message.php, it doesn't send a response back to the app; instead it waits until it has timed out and the poll() function starts again.
So my question is: how do I set up the PHP so I can update the message on my server and have it sent out instantly to anyone polling?
Long polling is actually very resource intensive for what it achieves
The problem you have is that it's constantly opening a connection every second, which in my opinion is highly inefficient. For your situation, there are two ways to achieve what you need; the preferred way being to use web sockets (I'll explain both):
Server Sent Events
To avoid your inefficient Ajax timeout code, you may want to look into Server Sent Events, an HTML5 technology designed to handle "long-polling" for you. Here's how it works:
In JS:
var source = new EventSource("/php/notify.php");
source.onmessage = function(event) {
    document.getElementById("result").innerHTML += event.data + "<br>";
};
In PHP:
You can send notifications & messages using the SSE API interface. I don't have any code at hand, but if you want me to create an example, I'll update this answer with it
This will cause JavaScript to long-poll the endpoint (your PHP file), listening for updates sent by the server. Somewhat inefficient, but it works.
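For reference, a minimal PHP endpoint for the EventSource above might look something like this (a sketch only; the message is a placeholder for whatever your server wants to push):
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
/* placeholder: fetch whatever you want to push to the client */
$message = "service message goes here";
/* SSE messages are "data: ...\n\n"; flush so the client sees it immediately */
echo "data: {$message}\n\n";
flush();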
WebSockets
Websockets are another ballgame completely, and are really great.
Long-polling & SSEs work by constantly opening new requests to the server, "listening" for any information that is generated. The problem is that this is very resource-intensive, and consequently, quite inefficient. The way around this is to open a single sustained connection called a web socket.
StackOverflow, Facebook & all the other "real-time" functionality you enjoy on these services is handled with web sockets, and they work in exactly the same way as SSEs -- they open a connection in JavaScript & listen to any updates coming from the server.
Although we've never hand-coded any websocket technology, it's strongly recommended you use one of the third-party socket services (for reliability & extensibility). Our favourite is Pusher.
I want to send regular updates from the server to the client. For that I used server-sent events. I'm pasting the code below:
Client side
Getting server updates
<script>
if(typeof(EventSource) != "undefined")
{
    var source = new EventSource("demo_see.php");
    source.onmessage = function(event)
    {
        document.getElementById("result").innerHTML = event.data + "<br>";
    }
}
else
{
    document.getElementById("result").innerHTML = "Sorry, your browser does not support server-sent events...";
}
</script>
</body>
</html>
Server side
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
$x=rand(0,1000);
echo "data:{$x}\n\n";
flush();
?>
The code works fine, but it sends updates every 3 seconds. I want to send updates in milliseconds. I tried sleep(1) after flush(), but it only increases the interval by a further second. Does anyone have an idea how I can accomplish this?
Also, can I send images using server-sent events?
As discussed in the comments above, running a PHP script in an infinite loop with a sleep or a usleep is incorrect, for two reasons:
The browser will not see any event data (presumably it waits for the connection to close first) while that script is still running. I recall that early browser implementations of SSE allowed this but it is no longer the case.
Even if it did work browser-side, you would still be faced with the issue of having a PHP script that runs excessively long (until the php.ini timeout settings, such as max_execution_time, kick in). If this happens once or twice it is OK. If there are X thousand browsers simultaneously seeking the same SSE from your server, it will bring your server down.
The right way to do things is to get your PHP script to respond with event stream data and then gracefully terminate as it would normally do. Provide a retry value - in milliseconds - if you want to control when the browser tries again. Here is some sample code:
function yourEventData(&$retry)
{
    //do your own stuff here and return your event data.
    //You might want to set a $retry value (in milliseconds)
    //so the browser knows when to try again (not the default 3000 ms)
}
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
header('Access-Control-Allow-Origin: *'); //optional
$data = yourEventData($retry);
echo "retry: {$retry}\ndata: {$data}\n\n";
As an answer to the original question this is a bit late but nevertheless in the interests of completeness:
What you get when you poll the server in this way is just data. What you do with it afterwards is entirely up to you. If you want to treat those data as an image and update an image displayed in your web page you would simply do
document.getElementById("imageID").src = "data:image/png;base64," + Your event stream data;
So much for the principles. I have on occasion forgotten that retry has to be in milliseconds and ended up returning, for example, retry:5\n\n, which, much to my surprise, still worked. However, I would hesitate to use SSE to update a browser-side image at 100 ms intervals. A more typical usage would be along the following lines:
User requests a job on the server. That job either gets queued behind other jobs or is likely to take quite a bit of time to execute (e.g. creating a PDF or an Excel spreadsheet and sending it back)
Instead of making the user wait with no feedback - and risking a timeout - one can fire up an SSE which tells the browser the ETA for the job to finish, and a retry value is set up so the browser knows when to look again for a result (see the sketch after this list).
The ETA is used to provide the user with some feedback
At the end of the ETA the browser will look again (browsers do this automatically so you need do nothing)
If for some reason the job is not completed by the server, it should indicate that in the event stream it returns, e.g. data: {"code":-1}\n\n, so browser-side code can deal with the situation gracefully.
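To make the job/ETA scenario above concrete, here is a sketch of what such an endpoint might return (the JSON shape, the helper function and the 30-second ETA are assumptions for illustration):
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
/* hypothetical helper: has the queued job (the PDF, say) finished? */
$ready = jobIsReady();
if ($ready) {
    /* job done: hand the browser a link to the result */
    echo "data: {\"code\":1,\"url\":\"/downloads/report.pdf\"}\n\n";
} else {
    /* not yet: send the ETA and tell the browser to retry in 30 000 ms */
    echo "retry: 30000\n";
    echo "data: {\"code\":0,\"eta\":30}\n\n";
}
flush();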
There are other usage scenarios - updating stock quotes, news headlines etc. Updating images at 100 ms intervals feels (a purely personal view) like a misuse of the technology.
It is now close to 5 years since I posted this answer and it still gets upvoted quite regularly. For the benefit of anyone still using it as a reference: in many ways SSE is, in my view, a rather outdated technology. With the advent of widespread support for WebSockets, why bother with SSE? Quite apart from anything else, the cost of setting up and tearing down an HTTPS connection from the browser for each browser-side retry is very high. The WSS protocol is far more efficient.
A spot of reading if you want to implement websockets
Client Side
Server side via PHP with Ratchet
With Nginx and NChan
To my mind PHP is not a great language for handling websockets, and Ratchet is far from easy to set up. The Nginx/NChan route is far easier.
The reason for this behavior (message every 3 seconds) is explained here:
The browser attempts to reconnect to the source roughly 3 seconds after each connection is closed
So one way to get a message every 100 milliseconds is to change the reconnect time (in the PHP):
echo "retry: 100\n\n";
This is not very elegant though; a better approach would be an endless loop in PHP that sleeps for 100 milliseconds on each iteration. There is a good example here; just change the sleep() to usleep() to support milliseconds:
while (1) {
    $x = rand(0, 1000);
    echo "data: {$x}\n\n";
    flush();
    usleep(100000); //100000 microseconds = 0.1 seconds (1000000 = 1 second)
}
I believe that the accepted answer may be misleading. Although it answers the question correctly (how to set up 1 second interval) it is not true that infinite loop is a bad approach in general.
SSE is used to get updates from the server when there actually are updates, as opposed to Ajax polling, which constantly checks for updates (even when there are none) at some time interval. This can be accomplished with an infinite loop that keeps the server-side script running all the time, constantly checks for updates and echoes them only when there are changes.
It is not true that:
The browser will not see any event data while that script is still running.
You can run the script on the server and still send the updates to the browser without ending the script execution, like this:
while (true) {
    echo "data: test\n\n";
    flush();
    ob_flush();
    sleep(1);
}
Doing it by sending a retry parameter without an infinite loop will end the script and then start it again, end it, start it again... This is similar to Ajax polling, checking for updates even if there are none, and this is not how SSE is intended to work. Of course there are some situations where this approach is appropriate, as listed in the accepted answer (for example, waiting for the server to create a PDF and notifying the client when it's done).
Using the infinite-loop technique will keep the script running on the server the whole time, so you should be careful with a lot of users, because you will have a script instance for each of them and it could lead to server overload. On the other hand, the same issue would happen even in a simple scenario where you suddenly get a bunch of users on the website (without SSE), or if you were using Web Sockets instead of SSE. Everything has its own limitations.
Another thing to be careful about is what you put in the loop. For example, I wouldn't recommend putting a database query in a loop that runs every second, because then you're also putting the database at risk of overload. I would suggest using some kind of cache (Redis, or even a simple text file) for this case.
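A sketch of that idea with a plain text file as the cache (the file path, and the convention that whatever process produces updates rewrites this file, are assumptions for the example):
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
/* hypothetical cache file, rewritten by whatever process produces updates */
$cacheFile = '/tmp/updates.cache';
$lastMtime = 0;
while (true) {
    clearstatcache(true, $cacheFile);
    $mtime = @filemtime($cacheFile);
    /* touch the cache, not the database, and emit only when it has changed */
    if ($mtime !== false && $mtime > $lastMtime) {
        $lastMtime = $mtime;
        echo "data: " . json_encode(file_get_contents($cacheFile)) . "\n\n";
        ob_flush();
        flush();
    }
    sleep(1);
}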
SSE is an interesting technology, but one that comes with a choking side effect on implementations using an Apache/PHP backend.
When I first found out about SSE I got so excited that I replaced all my Ajax polling code with SSE implementations. Only a few minutes after doing this, I noticed my CPU usage went up to 99/100, and the fear that my server would soon be brought down forced me to revert the changes back to the friendly old Ajax polling. I love PHP, and even though I knew SSE would work better on Node.js, I just wasn't ready to go that route yet!
After a period of critical thinking, I came up with an SSE Apache/PHP implementation that could work without literally choking my server to death.
I'm going to share my SSE server-side code with you; hopefully it helps someone overcome the challenges of implementing SSE with PHP.
<?php
/* This script fetches the latest posts in the news feed */
header("Content-Type: text/event-stream");
header("Cache-Control: no-cache");

// prevent direct access
if ( ! defined("ABSPATH") ) die("");

/* push the current user's session data into the global space so
 * we can release the session lock */
$GLOBALS["exported_user_id"] = user_id();
$GLOBALS["exported_user_tid"] = user_tid();

/* now release the session lock, having exported the session data
 * into the global space. If we don't do this, no other scripts
 * will run, causing the website to lag even when opening it in a
 * new tab */
session_commit();

/* how long this connection should be maintained: while we want to
 * wait on the server long enough for an update, holding the
 * connection forever burns CPU resources. Depending on the server
 * resources you have available you can tweak this higher or lower.
 * Typically, the higher it is the closer your implementation stays
 * to true SSE; otherwise it becomes equivalent to Ajax polling.
 * However, a higher time burns CPU resources, especially when
 * there are more users on your website */
$time_to_stay = strtotime("1 minute 30 seconds");

/* if the data we require is not sent, abort the connection. You
 * can use this to bail out when data needed for the script's
 * operation is not passed along. Typically SSE reconnects after
 * 3 seconds */
if ( ! isset( $_GET["id"] ) ){
    exit;
}

/* if "HTTP_LAST_EVENT_ID" is set, then this is a continuation of a
 * temporarily terminated script. This is important if your SSE is
 * maintaining state: you can use the header to get the last event
 * ID sent */
$last_postid = ( isset( $_SERVER["HTTP_LAST_EVENT_ID"] ) )
    ? intval( $_SERVER["HTTP_LAST_EVENT_ID"] )
    : intval( $_GET["id"] );

/* keep the connection active until there's data to send to the client */
while (true) {
    /* you can assume this function performs some database
     * operations to get the latest posts */
    $data = fetch_newsfeed( $last_postid );

    /* if the data to push back to the client is not empty, there
     * must have been some new posts to send */
    if ( ! empty( trim( $data ) ) ){
        /* with SSE it's my common practice to JSON-encode all data,
         * because I've noticed that not doing so sometimes causes
         * SSE to lose the data packet and deliver only part of the
         * data on the client. This is bad, since we are returning
         * structured HTML data, and losing some of it would break
         * our HTML page when the data is inserted */
        $data = json_encode(array("result" => $data));
        echo "id: $last_postid\n"; // this is the lastEventID
        echo "data: $data\n\n";    // our data

        /* flush to avoid waiting for the script to terminate; make
         * sure the calls are in this order */
        @ob_flush(); flush();
    }

    // how many seconds remain before we should give up
    $time_stayed = intval(floor($time_to_stay) - time());

    /* if we have stayed past the time to stay, abort this
     * connection to free up CPU resources */
    if ( $time_stayed <= 0 ) { exit; }

    /* wait 5 seconds and continue from the top. We don't want to
     * keep pounding our DB in a tight loop, so we sleep a few
     * seconds and start again */
    sleep(5);
}
SSE on Nginx driven PHP websites seems to have some finer nuances. Firstly, I had to give this setting in the Location section of the Nginx configuration
fastcgi_buffering off;
Someone recommended that I change the fastcgi_read_timeout to a longer period but it did not really help... or maybe I did not dive deep enough
fastcgi_read_timeout 600s;
Both those settings are to be given in the Nginx configuration's location section.
The standard endless loop that many are recommending inside the SSE code tends to hang Nginx (or possibly PHP 7.4 FPM), and that is serious, as it brings down the entire server. Though people have suggested set_time_limit(0) in PHP to change the default timeout (which I believe is 30 seconds), I am not very sure it is a good strategy.
If you remove the endless loop entirely, the SSE system seems to work like polling: the JavaScript EventSource code keeps calling back the SSE PHP module. This makes it a bit simpler than Ajax polling (we don't have to write any extra JavaScript to do the polling), but it still keeps retrying, and hence is very similar to Ajax polling. And each retry is a complete reload of the PHP SSE code, so it is slower than what I finally did.
This is what worked for me. It is a hybrid solution, where there is a loop alright, but not an endless one. Once that loop is finished, the SSE PHP code terminates. That gets registered in the browser as a failure (You can see that in the inspector console) and the browser then calls the SSE code once again on the server. It is like polling, but at longer intervals.
In between one load of the SSE and the next reload, the SSE keeps working in the loop, during which additional data can be pushed into the browser. So you do have enough speed, without the headache of the entire server hanging.
<?php
$success = set_time_limit( 0 );
ini_set('auto_detect_line_endings', 1);
ini_set('max_execution_time', '0');
ob_end_clean();
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
header('X-Accel-Buffering: no');

//how fast do you want the browser to reload this SSE
//after the while loop fails:
echo "retry: 200\n\n";
//If any dynamic data comes into your application
//in this 'retry' time period, and disappears,
//then SSE will NOT be able to push that data.
//If it is too short, there may be insufficient
//time to finish some work within the execution
//of one loop of the SSE while loop below

$emptyCount = 0;
$execCount = 0;
$countLimit = 60; //Experiment with this; see which value works for you
$emptyLimit = 5;
$prev = "";
//$file_path is assumed to be set elsewhere to the file carrying the data to push
while($execCount < $countLimit){
    $execCount++;
    if( connection_status() != CONNECTION_NORMAL or connection_aborted() ) break;
    if(file_exists($file_path)) {
        //The file is to be deleted so that it does not come back again.
        //There can be a better method than the one suggested here,
        //but I'm not getting into it, as this is only about SSE overall
        $s = file_get_contents("https://.....?f=$file_path");
        if($s == "") {
            $emptyCount++;
            $prev = "";
        }
        else {
            if($s != $prev){
                $prev = $s;
                echo $s; //This is formatted as data:...\n\n as needed by SSE
            }
        }
        //If it is continually empty then break out of the loop. Why hang around?
        if($emptyCount > $emptyLimit) {
            $emptyCount = 0;
            $prev = "";
            break;
        }
    } else $prev = "";
    #ob_flush();
    #flush();
    sleep(1);
}
I am trying to use periodic refresh (AJAX)/polling on my site via XMLHttpRequest (XHR) to check every 10 seconds whether a user has a new message in the database; if there is one, I inform him/her by dynamically creating a div like this:
function shownotice() {
    var divnotice = document.createElement("div");
    var closelink = document.createElement("a");
    closelink.onclick = this.close;
    closelink.href = "#";
    closelink.className = "close";
    closelink.appendChild(document.createTextNode("close"));
    divnotice.appendChild(closelink);
    divnotice.className = "notifier";
    divnotice.setAttribute("align", "center");
    document.body.appendChild(divnotice);
    divnotice.style.top = document.body.scrollTop + "px";
    divnotice.style.left = document.body.scrollLeft + "px";
    divnotice.style.display = "block";
    request(divnotice);
}
Is this a reliable or stable way to check for messages? Specifically, when I look under Firebug, a lot of requests are going to my database. Can this method bring my database down because of too many requests? Is there another way to do this? When I log in to Facebook and check under Firebug, no requests are happening, but I know they are using periodic refresh too... how do they do that?
You can check for new data every 10 seconds, but instead of checking the db, you need to do a lower impact check.
What I would do is modify the db update process so that when it makes a change to some data, it also updates the timestamp on a file to show that there is a recent change.
If you want better granularity than "something changed somewhere in the db" you can break it down by username (or some other identifier). The file(s) to be updated would then be the username for each user who might be interested in the update.
So, when your script asks the server if there is any information for user X newer than time t, instead of making a DB query, the server-side script can just compare the timestamp of a file with the time parameter and see if there is anything new in the database.
In the process that is updating the DB, add code that (roughly) does:
foreach username interested in this update
{
    touch the file \updates\username
}
Then your function to see if there is new data looks something like:
function NewDataForUser (string username, time t)
{
timestamp ts = GetLastUpdateTime("\updates\username");
return (ts > t);
}
Once you find that there is new data, you can then do a full blown DB query and get whatever information you need.
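In PHP, a sketch of that check might look like this (the /updates directory layout is simply carried over from the pseudocode above):
<?php
/* in the DB update process: touch the marker file for each interested user */
function markUpdateForUser($username) {
    touch("/updates/" . $username);
}

/* the cheap check: compare the marker file's mtime with the
 * client-supplied timestamp instead of querying the database */
function newDataForUser($username, $t) {
    $file = "/updates/" . $username;
    clearstatcache(); // filemtime() results are cached per request
    $ts = @filemtime($file);
    return ($ts !== false && $ts > $t);
}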
I left facebook open with firebug running and I'm seeing requests about once a minute, which seems like plenty to me.
The other approach, used by Comet, is to make a request and leave it open, with the server dribbling out data to the client without completing the response. This is a hack, and violates every principle of what HTTP is all about :). But it does work.
This is quite unreliable and probably far too taxing on the server in most cases.
Perhaps you should have a look into a push interface: http://en.wikipedia.org/wiki/Push_technology
I've heard Comet is the most scalable solution.
I suspect Facebook uses a Flash movie (they always download one called SoundPlayerHater.swf) which they use to do some comms with their servers. This does not get caught by Firebug (it might be by Fiddler, though).
This is not the best approach, because you end up querying your server every 10 seconds even when there are no real updates.
Instead of this polling approach, you can simulate server push (reverse AJAX or COMET). This will drastically reduce the server workload, and the client is only updated when there is an update on the server side.
As per Wikipedia:
Reverse Ajax refers to an Ajax design pattern that uses long-lived HTTP connections to enable low-latency communication between a web server and a browser. Basically it is a way of sending data from client to server and a mechanism for pushing server data back to the browser.
For more info, check out my other response to a similar question.