I run a couple of game tunnelling servers and would like to have a page where the client can run a ping on all the servers and find out which is the most responsive. As far as I can see there seems to be no proper way to do this in JavaScript, but I was thinking, does anybody know of a way to do this in flash or some other client browser technology maybe?
Most in-browser technology, including JavaScript, enforces a same-origin policy. It may still be possible to dynamically add DOM elements, such as images, and collect timing information using the onload event handler.
Pseudo-code
for (const server of servers) {
    // const + for...of keeps each server correctly bound in the onload closure
    const img = document.createElement('IMG');
    server.startTime = getCurrentTimeInMS();
    img.onload = function () { server.endTime = getCurrentTimeInMS(); };
    img.src = server.imgUrl;
}
Then wait an appropriate time and check the timing for each server object. Repeat as needed and compute averages if you want. I'm not sure what kind of accuracy you can expect.
Disadvantages:
You are probably using the wrong tool for the job. A browser is not equipped for this sort of application.
It's probably quite inaccurate.
If the resource you request is cached it won't give you the results you want, but you can work around that by changing the url each time.
This is bandwidth-intensive compared to a normal ping. Make the image tiny, such as a spacer.gif file.
The timing depends not only on the latency of the remote server but the bandwidth of that server. This may be a more or less useful measure but it's important to note that it is not simply the latency.
You need to be able to serve HTTP requests from the various servers and, crucially, each server should serve the exact same resource (or a resource of the same length). Conditions on the server can affect the response time, such as if one server is compressing the data and another isn't.
Before the call to the server, record the Javascript time:
var startTime = new Date();
Load an image from the server:
var img = new Image();
img.onload = function() {
    // record end time
};
img.src = "http://server1.domain.com/ping.jpg";
As soon as the request is finished (i.e. inside the onload handler), record the time again. (Assuming, of course, that the request didn't time out.)
var endTime = new Date();
Your ping in milliseconds is:
var ping = endTime.getTime() - startTime.getTime();
All you really need is the time from the connection start, to the time of the first readystate change...
function getPing() {
    var start;
    var client = getClient(); // XMLHttpRequest object
    client.onreadystatechange = function() {
        if (client.readyState > 0) {
            pingDone(start); // handle ping
            client.onreadystatechange = null; // remove handler
        }
    };
    start = new Date();
    client.open("HEAD", "/ping.txt"); // static file
    client.send();
}
function pingDone(start) {
    var done = new Date();
    var ms = done.valueOf() - start.valueOf();
    alert(ms + "ms ping time");
}
function getClient() {
    if (window.XMLHttpRequest)
        return new XMLHttpRequest();
    if (window.ActiveXObject)
        return new ActiveXObject('MSXML2.XMLHTTP.3.0');
    throw new Error("No XMLHttpRequest object available.");
}
Here's an <iframe> approach:
Create a table (not necessarily in the literal <table> sense) with two columns. The first column will hold the name of servers (and possibly links to them). The second column has iframes that load probe documents from the respective servers. Each probe document does this on the initial fetch request:
Get current system time
Do a redirect (302) to a second probe document while passing the system time as a query parameter
The second probe document reads the current system time, calculates the delta from the initial reading that was passed to it and just displays it in big fat letters. This delta will be the time it took for the server to respond to the client with a redirect response plus the time it took for the client to make the second request to the redirection target. It's not exactly a "ping" but it's a comparable measure of the client's relative latency with each server. In fact, it's a "reverse ping" from the server to the client.
You'd be using iframes without infringing the same-domain policy because there's no attempt at manipulating the iframe contents at all. The player will simply see the values with his/her own eyes and you'll rely on the user glancing at the numbers and clicking on the server link that makes the most sense.
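For illustration, here is one way the two probe documents could be implemented. This is only a sketch; it assumes a Node.js/Express server, and the route names and port are made up, not part of the original answer.
const express = require('express');
const app = express();
// First probe: note the current server time and redirect (302) to the
// second probe, passing that time along as a query parameter.
app.get('/probe', (req, res) => {
    res.redirect(302, '/probe2?t0=' + Date.now());
});
// Second probe: compute the delta and display it in big fat letters.
app.get('/probe2', (req, res) => {
    const delta = Date.now() - Number(req.query.t0);
    res.send('<h1>' + delta + ' ms</h1>');
});
app.listen(8080);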
Anything that makes an HTTP request (like most of the answers here) will generally measure a latency at least twice what you'd see for a normal ping, because you need the three-way handshake and the termination packet at minimum (two round trips rather than one). If you make HTTP requests, try to keep the headers to a minimum. Long enough headers (due to a chatty server, or cookies etc. on the client) can add additional round trips into the mix, throwing off your measurements.
As Cherona points out, if you already have an active HTTP/2 connection to the server, or if the server speaks HTTP/3, then this may not be the case.
The most accurate option would be to open a websocket connection to each server and measure the time it takes to send a tiny message and receive a tiny response (after the connection has been established).
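A minimal sketch of that WebSocket measurement, assuming each server exposes an echo endpoint that sends a tiny message straight back (the URL below is a placeholder):
function wsPing(url) {
    return new Promise(function (resolve, reject) {
        var ws = new WebSocket(url);
        ws.onerror = reject;
        ws.onopen = function () {
            var start = performance.now();        // start timing after the handshake
            ws.onmessage = function () {
                resolve(performance.now() - start); // round trip in milliseconds
                ws.close();
            };
            ws.send('ping');                      // tiny payload
        };
    });
}
// Usage: wsPing('wss://server1.example.com/echo').then(function (ms) { console.log(ms); });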
If you are talking about running something client side, I am not sure this is possible due to security reasons.
Maybe your best bet would be a Java applet - but again this needs to be checked against the local security policy.
If I try to think about some hack in JS to do this, maybe you can try to send an async request with a callback function which measures the milliseconds it took - but this is just off the top of my head.
It's not that hard to measure server response time in Flash.
Flash must ask for a policy file before accessing remote servers.
The default location for such policy file is at the root folder of the server: /crossdomain.xml
(You can easily find information about the crossdomain file format)
Since such a file is needed anyway, why not use it to measure server response time? Load the file itself instead of an image and measure the time it took using getTimer().
This will give you a good estimate on HTTP connections.
But if you're dealing with game servers, you might want to directly check the speed of the TCP connection. To do that you'll need to use the flash.net.Socket class.
You'll also have to ask for a policy file first by running:
Security.loadPolicyFile("xmlsocket://server.domain.com:5342");
Where 5342 represents your server's port number where it should respond with the proper XML policy string.
After making the socket connection, any request/response will let you measure different server response times.
The problem with 'file pings' is that you would be evaluating the HTTP server's response, whereas the resource your games actually target may behave very differently and therefore have a different latency.
Just an idea out of the blue, maybe even unrealistic depending on the actual context:
but, wouldn't it be interesting to make a server script based on a short sequence of tasks the servers typically execute during gameplay (e.g. opening an RTMP connection, retrieving a piece of information, sending it back)? Depending on the total number of servers, you could open them almost simultaneously and treat the first response as the winner (subtracting the time your client needs to process each query independently).
Of course this is a quite expensive method, server-side speaking, but at least you would hopefully get a reliable result (server and network latencies summed up). Even if it takes a couple of seconds to evaluate, that's only a fraction of the total enjoyable game-play time.
Based on the responses of Mr. Shiny and Georg Schölly, here is a complete and commented example.
To test it, just copy and paste the code blocks below, in the same order, into an empty .html, .php, or other compatible file.
Before starting the request, record the current JavaScript time.
Using new Date(), we create a new date object with the current date and time.
<script type="text/javascript">
var startTime = new Date();
Now let's create an HTML image object, still without a source, and assign it to the variable img.
var img = new Image();
The next step is to give the image a source. The .src property reflects the src HTML attribute.
Important! Point your img.src to a very small and lightweight image file, if possible anything less than 10KB.
To prevent caching, a random parameter is appended to the URL, after the .png extension.
var random_string = Math.random().toString();
img.src = "http://static.bbci.co.uk/frameworks/barlesque/5.0.0/orb/4/img/bbc-blocks-dark.png" + "?" + random_string;
Now we define the function that will run only when the image loads, thanks to .onload:
img.onload = function() {
    var endTime = new Date();
    var ping = endTime.getTime() - startTime.getTime();
    alert(img.src + " loaded in " + ping + " ms");
}
</script>
Inside the function, the variable endTime receives the date and time at which the image finished loading.
Lastly, the ping variable receives the final time minus the initial time.
The alert popup shows the result.
Related
A web client should only expose some features when a backend API is up and running. Therefore, I'm looking for a clean way to monitor the availability of this backend.
As a quick fix, I made a timer-based function that performs a basic GET on the API root. It's not very clean, generates lots of traffic, and pollutes the JavaScript console with errors (when the server is down).
How should one deal with such situation?
You can trigger something along these lines whenever you need it:
function checkServerStatus()
{
    setServerStatus("unknown");
    var img = document.body.appendChild(document.createElement("img"));
    img.onload = function()
    {
        setServerStatus("online");
    };
    img.onerror = function()
    {
        setServerStatus("offline");
    };
    img.src = "http://myserver.com/ping.gif";
}
Make ping.gif small (1 pixel) to make it as fast as possible.
Of course you can do it more cleanly by calling an API endpoint that returns a tiny response very quickly, but that requires some back-end coding; this approach only needs you to place a 1-pixel GIF in the right directory on the server. You can use any picture already present on the server, but expect more traffic and time as the image grows larger.
Now either put this in a function that is called periodically, or simply call it whenever you need to check the status; it's up to you.
If you need the server to send your app a notification when it goes down, then you need to implement push technology:
https://en.wikipedia.org/wiki/Push_technology
Ideally, you would have a highly reliable third server, with a fast response rate, pinging the desired server at some interval to determine whether it is up, and then pushing that information to your app. That way the third server only sends a push when the status of your app server changes. Ideally, its requests have high priority in your app server's queue, and the two servers are well connected and close to each other, but not on the same network in case that network fails.
Recommendation:
The first approach should do you good since it's simple to implement and requires the least amount of knowledge.
Consider the second if:
You need a really small checking interval, which makes your application slower and network traffic higher.
You have multiple applications that need the same check, which makes the load heavier on each application, the network, AND the server. The second approach lets you use a single ping to determine the truth for all apps.
In order to limit the number of requests, a simple solution is to use server-sent events. This protocol, built on top of HTTP, allows the server to push multiple updates in response to a single client request.
Client side code (javascript) :
var evtSource = new EventSource("backend.php");
evtSource.onmessage = function(e) {
    console.log('status:' + e.data);
};
evtSource.onerror = function(e) {
    // add some retry then display error to the user
};
Backend code (PHP, also supported by other languages)
header("Content-Type: text/event-stream\n\n");
while (1) {
// Each 30s, send OK status
echo "OK\n";
ob_flush();
flush();
sleep(30);
}
In both cases it limits the number of requests (only one per "session"), but you will have one socket open per client, which can also be too heavy for your server.
If you really want to lower the workload, you should delegate it to an external monitoring platform that can expose an API to publish the backend status.
Maybe one already exists if your backend is hosted on a cloud platform.
Is there an API symmetric to Server-Sent Events for generating fire-and-forget events from the browser to the server? I know how to avoid replying to a request on the server side, but how do I tell the browser that it does not need to wait for a reply?
The goal here is to save resources on the client side; say you want to send 10k events to the server as fast as possible, not caring about what the server replies.
Edit: While mostly irrelevant to the question, here is some background about the project I'm working on which would make use of an "AJAX fire-and-forget". I want to build a JavaScript networking library for Scala.js, one of whose applications will be to act as the transport layer between Akka actors on the JVM and in the browser (compiled with Scala.js). When WebSockets are not available I want to have some sort of fallback, and having a pending connection for the duration of a round trip on each JS->JVM message is not acceptable.
As you have asked for "how to tell the browser that it does not need to wait for a reply?"
I assume that you do not want to process the server reply.
In that case, it is better to use the one-pixel-image response trick, which is implemented by Google for analytics and tracking, and by many other such services.
More details here
The trick is to create a new image using JavaScript and set its src property; the browser will immediately fire the request for the image, and it can issue several such requests in parallel.
var image = new Image();
image.src = "your-script.php?id=123&other_params=also";
PROs:
easy to implement
less load on server/client than an ajax request
CONs:
you can send only GET requests using this approach.
Edit
For more references:
http://help.yahoo.com/l/us/yahoo/ywa/faqs/tracking/advtrack/3520294.html
https://support.google.com/dfp_premium/answer/1347585?hl=en
How to create and implement a pixel tracking code
Again, they use the same pixel-image technique.
So, just to be clear, you're trying to use the XMLHttpRequest as a proxy for your network communication, which means you are 100% at the mercy of whatever XMLHttpRequest offers you, right?
My take is that if you're going to stick with XMLHttpRequest for this, you're going to have to just make peace with getting a server response. Just make the call asynchronously and have the response handled by a no-op function. Consider what somebody else suggested, using a queue on the server (or an asynchronous method on the server) so you return immediately to the client. Otherwise, I really think JavaScript is just the wrong tool for the job you're describing.
XMLHttpRequest is going to be a different implementation (presenting a more or less common interface contract) in every browser. I mean, Microsoft invented the thing, then the other browser makers emulated it, then voila, everybody started calling it Web 2.0. Point being, if you push too hard at the doughy center of XMLHttpRequest, you may get different behavior in different browsers.
XMLHttpRequest, as far as I know, strictly uses TCP (no UDP option), so at the very least your client is going to receive a TCP ACK from the server. There is no way to tell the server not to respond at that level. It's baked into the TCP/IP network stack.
Additionally, the communication uses the HTTP protocol, so the server will respond with HTTP headers... right? I mean, that is simply the way the protocol is defined. Telling HTTP to be something different is kind of like telling a cat to bark like a chicken.
Even if you could cancel the request on the client side by calling abort() on XMLHttpRequest, you're not cancelling it on the server side. To do so, even if it were possible with XMLHttpRequest, would require an additional request sent all the way to the server to tell it to cancel the response to the preceding request. How does it know which response to cancel? You'd have to manage request id's of some kind. You would have to be resilient to out-of-order cancellation requests. Complicated.
So here's a thought (I'm just thinking out loud): Microsoft's XMLHttpRequest was based at least in spirit on an even earlier Microsoft technology from the Visual Interdev days, which used a Java applet on the client to asynchronously fire off a request to the server, then it would pass control to your preferred JavaScript callback function when the response showed up, etc. Pretty familiar.
That Java async request thing got skewered during the whole Sun vs. Microsoft lawsuit fiasco. I heard rumors that a certain original Microsoft CEO would blow a gasket any time he learned about Microsoft tech being implemented using Java, and kill the tech. Who knows? I was unhappy when that capability disappeared for a couple of years, then happy again when XMLHttpRequest eventually showed up.
Maybe you see where I'm going, here... :-)
I think perhaps you're trying to squeeze behavior out of XMLHttpRequest that it just isn't built for.
The answer might be to just write your own Java applet, do some socket programming and have it do the kind of communications you want to see from it. But then, of course, you'll have issues with people not having Java enabled in their browsers, exacerbated by all the recent Java security problems. So you're looking at code-signing certificates and so on. And you're also looking at issues that you'll need to resolve on the server side. If you still use HTTP and work through your web server, the web server will still want to send HTTP responses, which will still tie up resources on the server. You could make those actions on the server asynchronous so that TCP sockets don't stay tied up longer than necessary, but you're still tying up resources on the server side.
I managed to get the expected behavior using a very small timeout of 2ms. The following call is visible by the server but the connection is closed on the client side before any reply from the server:
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
    if (xhr.readyState == 2) {
        alert("Response header received, it was not a fire-and-forget...");
    }
};
xhr.open("POST", "http://www.service.org/myService.svc/Method", true);
xhr.timeout = 2;
xhr.send(null);
This is not fully satisfactory because the required timeout may vary between browsers and computers (for instance, 1ms does not work on my setup). Using a large timeout on the order of 50ms means that the client might hit the limit on the maximum number of concurrently open connections (6 on my setup).
Using XMLHttpRequest to send an async request (i.e. where you don't care if it succeeds or what the response is):
var req = new XMLHttpRequest();
req.open('GET', 'http://my.url.goes.here.com');
req.send();
You can do much the same thing with an Image object, too, btw:
new Image().src = 'http://my.url.goes.here.com';
The Image approach works particularly well if you're making cross-domain requests, since Images aren't subject to same-origin security restrictions the way XHR requests are. (BTW, it's good practice but not essential to have your endpoint return a 1x1 pixel PNG or GIF response with the appropriate Content-Type, to avoid browser console warnings like 'Resource interpreted as Image but transferred with MIME type text/html'.)
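For completeness, a rough sketch of what such a pixel endpoint could look like on the server. Node.js and the pixel.gif file name are my assumptions here, not part of the answer; any tiny pre-made GIF works.
const http = require('http');
const fs = require('fs');
const pixel = fs.readFileSync('pixel.gif'); // a pre-made 1x1 transparent GIF
http.createServer((req, res) => {
    // ...record whatever the query string carried (the fire-and-forget payload)...
    res.writeHead(200, { 'Content-Type': 'image/gif', 'Content-Length': pixel.length });
    res.end(pixel);
}).listen(8080);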
It sounds like you're trying to solve the wrong problem. Instead of dealing with this on the client, why not handle it on the server side?
Take the message from the client, put it on a service bus or store it in a database, and return to the client immediately. Depending on your stack and architecture, this should be fairly simple and very fast. You can process the message out of band: either a second service listens to the message bus and processes the request, or some sort of batch processor can come along later and process the records in the database.
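A bare-bones sketch of that idea, assuming an Express server and a simple in-memory queue (both are illustrative choices, not part of the answer):
const express = require('express');
const app = express();
app.use(express.json());
const queue = [];
app.post('/events', (req, res) => {
    queue.push(req.body);  // enqueue, don't process now
    res.status(202).end(); // 202 Accepted: reply to the client right away
});
// A separate worker drains the queue out of band.
setInterval(() => {
    while (queue.length) {
        const event = queue.shift();
        // ... process event (write to DB, forward to a service bus, etc.)
    }
}, 1000);
app.listen(8080);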
You won't have the same level of fine-grained control of the connection with XHR as with WebSockets. Ultimately, it's the browser that manages the HTTP connection lifecycle.
Instead of falling back from WebSockets to discrete XHR connections, maybe you can store and batch your events. For instance:
Client JS
function sendMessage(message) {
    WebSocketsAvailable ? sendWithWebSockets(message) : sendWithXHR(message);
}

var xhrQueue = [];

function sendWithXHR(message) {
    xhrQueue.push({
        timestamp: Date.now(), // if this matters
        message: message
    });
}

function flushXhrQueue() {
    if (xhrQueue.length) {
        var req = new XMLHttpRequest();
        req.open('POST', 'http://...');
        req.onload = function() { setTimeout(flushXhrQueue, 5000); };
        // todo: needs to handle errors, too
        req.send(JSON.stringify(xhrQueue));
        xhrQueue = [];
    }
    else {
        setTimeout(flushXhrQueue, 5000);
    }
}

setTimeout(flushXhrQueue, 5000);
On the server, maybe you can have two endpoints: one for WebSockets and one for XHR. The XHR handler deserialises the JSON queue object and calls (once per message) the same handler used by the WebSockets handler.
Server pseudo-code
function WSHandler(message) {
    handleMessage(message, Date.now());
}
function XHRHandler(jsonString) {
    var messages = JSON.parse(jsonString);
    for (var messageObj of messages) { // iterate the array items, not the indices
        handleMessage(messageObj.message, messageObj.timestamp);
    }
}
function handleMessage(message, timestamp) {
    ...
}
A Node.js project with the socket.io and express modules.
Each client has a canvas that runs animations on it. When the server emits the initiate parameter, the animation can start.
Now the problem is, there is a time gap between clients when their animations start. The longer the animation runs, the more obvious the gap becomes, and the positions of the figures become really different. What I want is for everybody to see the same thing on their screen.
Here's how the server deliver the data:
socket.broadcast.emit('init', initData);
socket.emit('init', initData);
The animation function is in the client; it starts when the initiate data is received from the server.
I'm not sure if it's because each client receives the data at a different time.
So how to reduce this gap?
Many thanks.
I think you should try the following: make sure (using onload events, collected on the server with socket.io) that every client has downloaded the animation, and then send a signal to start it.
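A minimal sketch of that readiness barrier with socket.io; the event names and the expected client count are assumptions for illustration:
// Server: wait until every client reports it has loaded the animation,
// then broadcast a single start signal.
const EXPECTED_CLIENTS = 4; // assumption: known number of players
let readyCount = 0;
io.on('connection', (socket) => {
    socket.on('animation-loaded', () => {
        readyCount++;
        if (readyCount === EXPECTED_CLIENTS) {
            io.emit('start-animation', initData); // everyone starts together
        }
    });
});
// Client: report readiness once assets are loaded, start only on the signal.
// socket.emit('animation-loaded');
// socket.on('start-animation', (initData) => startAnimation(initData));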
Here is a simple formula and routine that works with socket.io, PHP, or anything, really.
I used it to fade in a live video stream 10 seconds before it aired. Given the inherent lag and device performance patterns, and wrong time-zones, you can only expect to get about 30ms of precision forward or backward, but to most observers, it all happens "at the same time".
Here is a simulation of a server that's about two minutes behind a simulated client (you), and the server wants the client to show a popup in 1.2 seconds:
//the first two vars should come from the server:
var serverWant=+new Date() - 123456 + 1200; // what server time is the event ?
var serverTime=+new Date() - 123456; // what server time is now ?
//the rest happens in your normal event using available info:
var clientTime=+new Date(); // what client time is now ?
var off= clientTime - serverTime; // how far off is the client from the server?
var clientWant= serverWant + off; // what client time is the event ?
var waitFor = clientWant - +new Date(); // how many millis to time out for until event ?
setTimeout(function(){ alert( 'SNAP!' );}, waitFor);
how reliable is this? try changing both "- 123456"s to "+ 12345"s and see if the popup still waits 1.2 seconds to fire, despite not using Math.abs anywhere in the calculation...
in socket.io, you could send the server time and scheduled time to the client for computation in a pre-event:
socket.broadcast.emit('pre-init', {
serverTime: +new Date(),
serverWant: +new Date() + 1200
});
and then use those values and the above math to schedule the animation in a few moments or a few hours as needed, on-demand (yours) to the split second.
You need the Dead Reckoning technique in order to simulate the client-side state as closely as possible to the real state on the server.
You might send state packets to clients periodically, for example every 200ms (5 times a second), and extrapolate from this data on the client side.
In addition, you have to account for different latency for different clients. Since you want to keep the same state, there are generally two approaches: interpolation (use the last known data point and the one before it), or extrapolation (use the last known data point and predict into the future based on your own latency).
Extrapolation suits real-time interactive stuff better, but has problems with error correction: the client can make a wrong prediction (an object suddenly stopped, but because of the delay the client predicted it was still moving).
Interpolation makes everything slightly delayed and in the past, but does not suffer from such errors as there are no predictions. The drawback is that you need to wait, before interpolating, an amount of time equal to the latency of the slowest user. This means the slowest user forces everyone else to be slowed down as well.
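For illustration, a bare-bones extrapolation step might look like this (the state fields are made up for the example):
// Predict a position from the last known server state (dead reckoning).
function extrapolate(lastState, nowMs) {
    const dt = (nowMs - lastState.timestamp) / 1000; // seconds since the last update
    return {
        x: lastState.x + lastState.vx * dt, // position = last position + velocity * elapsed time
        y: lastState.y + lastState.vy * dt
    };
}
// Called every animation frame, between the periodic (e.g. 200 ms) server updates:
// const predicted = extrapolate(lastServerState, Date.now());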
I want to gather some information using the visitors of my websites.
What I need is for each visitor to ping 3 different hostnames and then save the following info into a DB.
Visitor IP, latency 1,latency 2, latency 3
Of course everything has to be transparent for the visitor without interrupting him in any way.
Is this possible? Can you give me an example? Are there any plugins for jQuery or something to make it easier?
EDIT
This is what I have so far: jsfiddle.net/dLVG6, but the data is too random. It jumps from 50 to 190.
This is going to be more of a pain than you might think.
Your first problem is that JavaScript doesn't have ping. Mostly what JavaScript is good at is HTTP and a few cousin protocols.
Second problem is that you can't just issue some ajax requests and time the results (that would be way too obvious). The same origin policy will prevent you from using ajax to talk to servers other than the one the page came from. You'll need to use JSONP, or change the src of an image tag, or something else more indirect.
Your third problem is that you don't want to do anything that will result in a lot of data being returned. You don't want data transfer time or extensive server processing to interfere with measuring latency.
Fourth, you can't ask for URLs that might be cached. If the object happened to be in the cache, you would get really low "latency" measurements but it wouldn't be meaningful.
My solution was to use an image tag with no src attribute. On document load, set the src to point to a valid server but use an invalid port. Generally, it is faster for a server to simply reject your connection than to generate a proper 404 error response. All you have to do then is measure how long it takes to get the error event from the image.
From the Fiddle:
var start = new Date().getTime();
$('#junkOne').attr('src', 'http://fate.holmes-cj.com:8886/').error(function () {
    var end = new Date().getTime();
    $('#timer').html("" + (end - start) + "ms");
});
The technique could probably be improved. Here are some ideas:
Use an IP address instead of a DNS host name.
Do the "ping" multiple times, throw out the highest and lowest scores, then average the rest (see the sketch after this list).
If your web page has a lot of heavy processing going on, try to do the tests when you think the UI load is lightest.
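A possible shape for the second idea, assuming a hypothetical pingOnce() helper that resolves to a single measurement in milliseconds:
// Sample several times, drop the extremes, and average what's left.
async function averagePing(pingOnce, samples = 7) {
    const results = [];
    for (let i = 0; i < samples; i++) {
        results.push(await pingOnce());
    }
    results.sort((a, b) => a - b);
    const trimmed = results.slice(1, -1); // throw out the highest and lowest
    return trimmed.reduce((sum, v) => sum + v, 0) / trimmed.length;
}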
With jQuery you could:
Use $.ajax(url, settings) (http://api.jquery.com/jQuery.ajax/), take the time in beforeSend and on complete via Date.now(), and subtract those times -> then you have the time for the request (not exactly the "ping", though).
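Roughly like this, for example (the URL is a placeholder):
var sendTime;
$.ajax("http://server1.example.com/ping.txt", {
    cache: false,                  // avoid a cached, instant "response"
    beforeSend: function () {
        sendTime = Date.now();     // time just before the request goes out
    },
    complete: function () {
        var elapsed = Date.now() - sendTime;
        console.log(elapsed + " ms (request time, not exactly a ping)");
    }
});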
2021:
Tried this again for a React app I'm building. I don't think the accuracy is too great.
const ping = () => {
    var start = new Date().getTime();
    api.get('/ping').then((res) => {
        console.log(res);
        var end = new Date().getTime();
        console.log(`${end - start} ms`);
    }, (err) => {
        console.log(err);
    });
};
Wrote my own little API, but I suppose there's just way too much going on during the request.
In terminal, I get about 23ms ping to my server.. using this it shoots up to like 200-500ms.
I am trying to use periodic refresh (ajax)/polling on my site with XMLHttpRequest (XHR) to check every 10 seconds whether a user has a new message in the database, and if so, inform him/her by dynamically creating a div like this:
function shownotice() {
    var divnotice = document.createElement("div");
    var closelink = document.createElement("a");
    closelink.onclick = this.close;
    closelink.href = "#";
    closelink.className = "close";
    closelink.appendChild(document.createTextNode("close"));
    divnotice.appendChild(closelink);
    divnotice.className = "notifier";
    divnotice.setAttribute("align", "center");
    document.body.appendChild(divnotice);
    divnotice.style.top = document.body.scrollTop + "px";
    divnotice.style.left = document.body.scrollLeft + "px";
    divnotice.style.display = "block";
    request(divnotice);
}
Is this a reliable or stable way to check for messages? When I look under Firebug, a lot of requests are going to my database. Can this method bring my database down because of too many requests? Is there another way to do this? When I log in to Facebook and check under Firebug, no requests are going on, but I know they use periodic refresh too. How do they do that?
You can check for new data every 10 seconds, but instead of checking the db, you need to do a lower impact check.
What I would do is modify the db update process so that when it makes a change to some data, it also updates the timestamp on a file to show that there is a recent change.
If you want better granularity than "something changed somewhere in the db" you can break it down by username (or some other identifier). The file(s) to be updated would then be the username for each user who might be interested in the update.
So, when your script asks the server if there is any information for user X newer than time t, instead of making a DB query, the server-side script can just compare the timestamp of a file with the time parameter and see if there is anything new in the database.
In the process that is updating the DB, add code that (roughly) does:
foreach username interested in this update
{
    touch the file \updates\username
}
Then your function to see if there is new data looks something like:
function NewDataForUser (string username, time t)
{
    timestamp ts = GetLastUpdateTime("\updates\username");
    return (ts > t);
}
Once you find that there is new data, you can then do a full blown DB query and get whatever information you need.
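If it helps to see it concretely, here is a rough Node.js version of that file-timestamp check; the paths and names are my own, not from the answer:
const fs = require('fs');
const path = require('path');
// Cheap check: has the per-user marker file been touched since the client last asked?
function newDataForUser(username, sinceMs) {
    const file = path.join('updates', username);
    try {
        return fs.statSync(file).mtimeMs > sinceMs;
    } catch (e) {
        return false; // no marker file yet, so nothing new
    }
}
// Only run the full DB query when the file check says something changed:
// if (newDataForUser('alice', lastCheckedMs)) { /* query the database */ }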
I left facebook open with firebug running and I'm seeing requests about once a minute, which seems like plenty to me.
The other approach, used by Comet, is to make a request and leave it open, with the server dribbling out data to the client without completing the response. This is a hack, and violates every principle of what HTTP is all about :). But it does work.
This is quite unreliable and probably far too taxing on the server in most cases.
Perhaps you should have a look into a push interface: http://en.wikipedia.org/wiki/Push_technology
I've heard Comet is the most scalable solution.
I suspect Facebook uses a Flash movie (they always download one called SoundPlayerHater.swf) which they use to do some comms with their servers. This does not get caught by Firebug (might be by Fiddler though).
This is not a good approach, because you end up querying your server every 10 seconds even when there are no real updates.
Instead of this polling approach, you can simulate a server-push (reverse AJAX or COMET) approach. This considerably reduces the server workload, and the client is only updated when there is an update on the server side.
As per Wikipedia:
Reverse Ajax refers to an Ajax design pattern that uses long-lived HTTP connections to enable low-latency communication between a web server and a browser. Basically it is a way of sending data from client to server and a mechanism for pushing server data back to the browser.
For more info, check out my other response to a similar question.