I am currently writing the communication framework for a web game. The communications map:
[figure: communications map]
The code is as follows:
test.php:
<!DOCTYPE html>
<html>
<head>
<title> Test </title>
<script>
function init()
{
    var source = new EventSource("massrelay.php");
    source.onmessage = function(event)
    {
        console.log("massrelay sent: " + event.data);
        var p = document.createElement("p");
        var t = document.createTextNode(event.data);
        p.appendChild(t);
        document.getElementById("rec").appendChild(p);
    };
}
function test()
{
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function ()
    {
        if (xhr.readyState === XMLHttpRequest.DONE && xhr.status === 200)
        {
            console.log("reciver responded: " + xhr.responseText);
        }
    };
    xhr.open("GET", "reciver.php?d=" + encodeURIComponent(document.getElementById("inp").value), true);
    xhr.send();
    console.log("you sent: " + document.getElementById("inp").value);
}
</script>
</head>
<body>
<button onclick="init()">Start Test</button>
<textarea id="inp"></textarea>
<button onclick="test()">click me</button>
<div id="rec"></div>
</body>
</html>
This takes user input (currently a textbox for testing) and sends it to the receiver, then writes the receiver's response to the console; I have never received an error from the receiver. It also adds an event listener for the SSE stream.
reciver.php:
<?php
$data = $_REQUEST["d"];
echo (file_put_contents("data.txt", $data) !== false) ? $data : "error writing";
?>
As you can see, this is very simple: it only writes the data to data.txt before sending back confirmation that the write succeeded. data.txt is simply the "tube" that data is passed through to massrelay.php.
massrelay.php:
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
while (1)
{
    $data = file_get_contents("data.txt");
    if ($data != "NULL")
    {
        echo "data: " . $data . "\n\n";
        flush();
        file_put_contents("data.txt", "NULL");
    }
}
?>
massrelay.php checks if there is any data in data.txt and if so will pass it using SSE to anyone with an event listener for it, once it reads the data it will clear the data file.
The entire thing actually works, except for the slight issue that it can take anywhere from 30 seconds to 10 minutes for massrelay.php to send the data from the data file. For a web game this is completely unacceptable, as you need real-time action. I was wondering whether it is taking so long due to a flaw in my code; if not, I'm thinking hardware (I am hosting it myself on a 2006 Dell with a Sempron). If anyone sees anything wrong with it, please let me know. Thanks.
Three problems I see with your code:
No sleep
No ob_flush
Sessions
Your while() loop is constantly reading the file system. You need to slow it down. I've put a half-second sleep in the code below; experiment to find the largest value that still gives acceptable latency.
PHP has its own output buffers. You use @ob_flush() to flush them (the @ suppresses the error raised when no buffer exists) and flush() to flush the Apache buffers. Both are needed, and the order is important, too.
Finally, PHP sessions lock, so if your clients might be sending session cookies, even if your SSE script does not use the session data, you must close the session before entering the infinite loop.
I've added all three of those changes to your code, below.
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
session_write_close();
while (1)
{
    $data = file_get_contents("data.txt");
    if ($data != "NULL")
    {
        echo "data: " . $data . "\n\n";
        @ob_flush(); flush();
        file_put_contents("data.txt", "NULL");
    }
    usleep(500000); // half a second; tune to taste
}
BTW, the advice in the other answer about using an in-memory database is good, but the file system overhead is in milliseconds, so it won't explain a "30 second to 10 minute" latency.
I don't know that writing to a flat file is the best way to do this. File I/O is going to be your big bottleneck here (reading on top of writing means you'll hit that ceiling really quickly). But assuming you want to keep doing it...
Your application could benefit from a PHP session to store some data so you're not waiting on I/O. This is where intermediate software like Memcached or Redis could also help you. What you would do is store the data from reciver.php in your text file AND write it into the memory cache (or put it into your session, which writes to the memory store). This makes retrieval very quick and reduces file I/O.
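A minimal sketch of that dual write, assuming the APCu extension is available (Memcached or Redis follow the same pattern; the relay_data key name is an assumption):
<?php
// reciver.php variant: keep the file as the durable copy, add a memory copy
$data = $_REQUEST["d"];
file_put_contents("data.txt", $data);
apcu_store("relay_data", $data);

// massrelay.php would then prefer the in-memory copy over file I/O:
$cached = apcu_fetch("relay_data", $hit);
$payload = $hit ? $cached : file_get_contents("data.txt");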
I would highly suggest a database for your data, though. MySQL in particular will load commonly accessed data into memory to speed up read operations.
Years ago I experimented with flat files and also with storing data in a DB for communication between multiple concurrent users and a server (this was for a Flash game, but the same principles apply).
Flat files offer the worst performance, as you will eventually run into read/write access issues.
A DB will eventually also fall over with too many requests, especially if you are hitting it thousands of times a second and there's no load balancing in place.
My answer does not solve your current problem but steers you in a different direction: you really ought to look at using a socket server. Maybe look into something like: https://github.com/reactphp/socket
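To give a flavour, here is a minimal sketch of an echo server using the react/socket API (Composer-installed; the address and port are arbitrary):
<?php
require 'vendor/autoload.php';

use React\Socket\SocketServer;
use React\Socket\ConnectionInterface;

// accept game clients on a local TCP port
$socket = new SocketServer('127.0.0.1:8080');

$socket->on('connection', function (ConnectionInterface $conn) {
    $conn->write("hello\n");
    // echo every chunk a client sends straight back to it
    $conn->on('data', function ($data) use ($conn) {
        $conn->write('echo: ' . $data);
    });
});
// the event loop runs automatically when the script reaches its end
Run it from a console with php server.php; it holds one persistent connection per client instead of one HTTP request per message.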
One issue you may experience with using a socket server is that shared hosts don't allow you to run shell scripts. My solution was to use my home PC for the socket communication and use my domain as the public entry point for the hosted game. Obviously we don't all have static IPs to point our games at, so I had to use dyndns, and back then it was free: http://dyn.com (there may be other services that are free now). With a home server you will also need to set up port forwarding on your router to send requests for specific ports on your IP/router to your LAN server. Make sure you are running firewalls on both the router and the server to protect your other potentially exposed ports.
I know this may seem complicated, but trust me, it's the most practical solution. If you need any help, PM me and I can try to guide you through any issues you run into.
EDIT: I deleted this answer because the OP says that my suggested test 1 (see below) works fine, so my theory about output buffering is wrong. On the other hand, he says the same code with the native functions fread, fwrite, fclose and flock doesn't work, so if buffering and file I/O are not the problem, I don't know what is. I removed my post because I didn't think it was a valid answer. Let me sum this up:
error display is enabled (E_ALL)
flush is working fine
OP says he used the native file functions properly (fopen, fread, fwrite, flock) and it doesn't help.
If flush is working and the file system is working, I can't do anything but trust that the OP is right, and give up.
So right now my job here is done; I can't help further without trying it myself on the OP's system, configuration and code.
I undeleted my answer so the OP can have the links to the docs and other people can see my attempt at a solution.
MY OLD POST I DELETED
1. First try a minimal test massrelay.php:
while(true) {
echo "test!";
sleep(1);
}
so you'll be sure that the problem is not file-related.
2. Make sure you have error_reporting and display_errors enabled.
I am guessing you get a response after 30 seconds because the PHP script is terminated when it hits the time limit. If you had errors enabled, you would see an error message informing you of that.
3. Make sure you actually flush your output and it's not buffered.
that it can take anywhere from 30 seconds to 10 minutes
Being able to see data after 30 seconds makes sense, because 30 seconds is the default max execution time in PHP.
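If that limit is what is cutting the stream short, the script can lift it explicitly; a one-line sketch (pair it with your own exit condition so the process does not live forever):
<?php
set_time_limit(0); // remove the 30-second execution limit for this script only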
It looks like flush() is not working in your scenario, and you should check the output_buffering setting in your php.ini file.
Please see this: php flush not working
Documentation:
http://php.net/manual/en/function.flush.php
http://php.net/manual/en/function.ob-flush.php
http://php.net/manual/en/book.outcontrol.php
http://php.net/manual/en/outcontrol.configuration.php
In one of several instances of having to debug SSE, I discovered that, ironically, the guard if (ob_get_level() > 0) {ob_end_clean();} was causing the issue. That guard is what you normally need to prevent PHP errors when there are no buffering levels active, yet reverting to a bare ob_end_clean(); solved the problem.
Related
I have a simple long poll request on my website to check if there are any new notifications available for a user. As far as the request goes, everything seems to work flawlessly; the request is only fulfilled once the database is updated (and a notification has been created for that specific user), and a new request is sent out straight after.
The Problem
What I have noticed is that while the request is waiting for a response from the database (as long polls should), all other requests to the server also hang with it - whether media files, AJAX requests or even new pages loading. This means that all requests to the server hang until I close my browser and reopen it.
What is even stranger is that if I visit another of my localhost sites (my long poll is on a MAMP virtual-host site, www.example.com), there is no problem and I can use it as if nothing has happened - despite the fact that it's technically on the same server.
My Code
This is what I have on my client side (longpoll.js):
window._Notification = {
listen: function(){
/* this is not jQuery's Ajax, if you're going to comment any suggestions,
* please ensure you comment based on regular XMLHttpRequest's and avoid
* using any suggestions that use jQuery */
xhr({
url: "check_notifs.php",
dataType: "json",
success: function(res){
/* this will log the correct response as soon as the server is
* updated */
console.log(res);
_Notification.listen();
}
});
}, init: function(){
this.listen();
}
}
/* after page load */
_Notification.init();
And this is what I have on my server side (check_notifs.php):
header("Content-type: application/json;charset=utf-8", false);
if(/* user is logged in */){
$_CHECKED = $user->get("last_checked");
/* update the last time they checked their notifications */
$_TIMESTAMP = time();
$user->update("last_checked", $_TIMESTAMP);
/* give the server a temporary break */
sleep(1);
/* here I am endlessly looping until the conditions are met, sleeping every
* iteration to reduce server stress */
$_PDO = new PDO('...', '...', '...');
while(true){
$query = $_PDO->prepare("SELECT COUNT(*) as total FROM table WHERE timestamp > :unix");
if($query->execute([":unix" => $_CHECKED])){
if($query->rowCount()){
/* check if the database has updated and if it has, break out of
* the while loop */
$total = $query->fetchAll(PDO::FETCH_OBJ)[0]->total;
if($total > 0){
echo json_encode(["total" => $total]);
break;
}
/* if the database hasn't updated, sleep the script for one second,
* then check if it has updated again */
sleep(1);
continue;
}
}
}
}
/* for good measure */
exit;
I have read about NodeJS and various other frameworks that are suggested for long polling, but unfortunately they're currently out of reach for me and I'm forced to use PHP. I have also had a look around to see if anything in the Apache configuration could solve my problem, but I only came across How do you increase the max number of concurrent connections in Apache?, and what's mentioned there doesn't seem to be the problem, considering I can still use my other localhost website on the same server.
I'm really confused as to how I can solve this issue, so all help is appreciated. Cheers.
What is actually happening is that PHP is waiting for this script to end before serving the next requests from the same client, because the session is locked.
As you can read here:
there is some lock somewhere -- which can happen, for instance, if the two requests come from the same client, and you are using file-based sessions in PHP: while a script is being executed, the session is "locked", which means the server/client will have to wait until the first request is finished (and the file unlocked) to be able to use the file to open the session for the second user.
the requests come from the same client AND the same browser; most browsers will queue the requests in this case, even when there is nothing server-side producing this behaviour.
there are more than MaxClients currently active processes -- see the quote from Apache's manual just before.
There is actually some kind of lock somewhere, and you need to check which lock it is. Maybe $_PDO is holding the lock, and you should close it before the sleep(1) so it stays unlocked until you make the next request.
You can try to raise your MaxClients and/or apply this answer
Perform session_write_close() (or the corresponding function in CakePHP) at the beginning of the Ajax endpoint to close the session.
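A minimal sketch of how the top of check_notifs.php could apply this (the session read is an assumption about where the user data comes from):
<?php
session_start();
$user = isset($_SESSION["user"]) ? $_SESSION["user"] : null; // read what you need first
session_write_close(); // release the session lock so parallel requests can proceed

header("Content-type: application/json;charset=utf-8", false);
// ... the existing long-poll loop follows unchanged ...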
I want to long poll a script on my server from within a phonegap app, to check for things like service messages, offers etc.
I'm using this technique in the js:
(function poll(){
$.ajax({
url: "/php/notify.php",
success: function(results){
//do stuff here
},
dataType: 'json',
complete: poll,
timeout: 30000,
});
})();
which will start a new poll whenever the previous request completes or times out (the timeout above is 30 seconds). I will stop the polling when the app is 'paused' to avoid extra load.
I am not sure how to set up the PHP side, though. I can set it up so that it doesn't return anything and just loops through the script, but how do I make it return a response as soon as I decide I want to send a message to the app? My PHP code so far is:
<?php
include 'message.php';
$counter = 1;
while ($counter > 0) {
    // if the message variable (from the included file) has data, send it back to the app
    if ($message != '') {
        // break out of the while loop if we have data
        break;
    }
}
//if we get here weve broken out the while loop, so we have a message, but make sure
if($message != ''){
// Send data back
print(json_encode($message));
}
?>
message.php contains a $message variable (an array) which is normally blank but contains data when I want it to. The problem is that when I update the $message var in message.php, it doesn't send a response back to the app; instead the request waits until it has timed out and the poll() function starts again.
So my question is: how do I set up the PHP so that I can update the message on my server and have it sent out instantly to anyone polling?
Long polling is actually very resource-intensive for what it achieves.
The problem is that it keeps opening new connections, which in my opinion is highly inefficient. For your situation there are two ways to achieve what you need, the preferred one being web sockets (I'll explain both):
Server Sent Events
To avoid your inefficient Ajax timeout code, you may want to look into Server Sent Events, an HTML5 technology designed to handle "long-polling" for you. Here's how it works:
In JS:
var source = new EventSource("/php/notify.php");
source.onmessage=function(event) {
document.getElementById("result").innerHTML+=event.data + "<br>";
};
In PHP:
You can send notifications & messages using the SSE API interface. I don't have any code at hand, but if you want me to create an example, I'll update this answer with it.
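Until then, here is a minimal sketch of what the PHP side might look like (getLatestMessage() is a hypothetical stand-in for your own database or cache lookup):
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
session_write_close(); // don't hold the session lock while responding

$message = getLatestMessage(); // hypothetical: fetch the newest service message, if any
if ($message !== null) {
    echo 'data: ' . json_encode($message) . "\n\n";
}
@ob_flush();
flush();
// the script ends here; the browser reconnects automatically (about every 3 seconds by default)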
This will cause JavaScript to poll the endpoint (your PHP file), reconnecting automatically every few seconds and listening for updates sent by the server. Somewhat inefficient, but it works.
WebSockets
Websockets are another ballgame completely, and are really great
Long-Polling & SSE's work by constantly opening new requests to the server, "listening" for any information that is generated. The problem is that this is very resource-intensive, and consequently, quite inefficient. The way around this is to open a single sustained connection called a web socket
StackOverflow, Facebook & all the other "real-time" functionality you enjoy on these services is handled with Web Sockets, and they work in exactly the same way as SSE's -- they open a connection in Javascript & listen to any updates coming from the server
Although we've never hand-rolled any websocket technology ourselves, it's by far recommended that you use one of the third-party socket services (for reliability & extensibility). Our favourite is Pusher.
I'm trying to determine how to setup a web socket for the first time ever so a working minimal example with static variables (IP address for example instead of getservbyname) will help me understand what is flowing where.
I want to do this the right way so no frameworks or addons for both the client and the server. I want to use PHP's native web sockets as described here though without over-complicating things with in-depth classes...
http://www.php.net/manual/en/intro.sockets.php
I've already put together some basic JavaScript...
window.onload = function(e)
{
if ('WebSocket' in window)
{
var socket = new WebSocket('ws://'+path.split('http://')[1]+'mail/');
socket.onopen = function () {alert('Web Socket: connected.');}
socket.onmessage = function (event) {alert('Web Socket: '+event.data);}
}
}
It's the PHP part that I'm not really sure about. Presuming we have a blank PHP file...
If necessary, how do I determine whether my server's PHP install has this socket functionality already available?
Is the request essentially handled as a GET or POST request in the example?
Do I need to worry about the port numbers? e.g. if ($_SERVER['SERVER_PORT']=='8080')
How do I return a basic message on the initial connection?
How do I return a basic message, say, five seconds later?
It's not that simple to create a simple example, I'm afraid.
First of all you need to check the PHP configuration to see whether the server was built with socket support (the --enable-sockets option).
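A quick way to check that from PHP itself, using two standard functions:
<?php
var_dump(extension_loaded('sockets'));      // true if the sockets extension is loaded
var_dump(function_exists('socket_create')); // the entry point of the low-level socket API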
Then you need to implement (or find) a websocket server that follows at least the Hybi10 specification (https://datatracker.ietf.org/doc/html/draft-ietf-hybi-thewebsocketprotocol-10) of websockets. If you find the "magic number" 258EAFA5-E914-47DA-95CA-C5AB0DC85B11 in the handshake code, you can be sure it follows at least Hybi06...
Finally, you need to have access to an admin console on the server in order to execute the PHP websocket server using php -q server.php
EDIT: This is the one I've been using a year ago ... it might still work as expected with current browsers supporting Websockets: http://code.google.com/p/phpwebsocket/source/browse/trunk/+phpwebsocket/?r=5
I want to send regular updates from server to client. For that I used server-sent event. I'm pasting the codes below:
Client side
Getting server updates
<script>
if (typeof(EventSource) != "undefined")
{
    var source = new EventSource("demo_see.php");
    source.onmessage = function(event)
    {
        document.getElementById("result").innerHTML = event.data + "<br>";
    };
}
else
{
    document.getElementById("result").innerHTML = "Sorry, your browser does not support server-sent events...";
}
</script>
</body>
</html>
Server side
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
$x=rand(0,1000);
echo "data:{$x}\n\n";
flush();
?>
The code works fine, but it sends updates every 3 seconds. I want to send updates at millisecond intervals. I tried sleep(1) after flush(), but it only increased the interval by another second. Does anyone have an idea how I can accomplish this?
Also, can I send images using server-sent events?
As discussed in the comments above, running a PHP script in an infinite loop with a sleep or usleep is wrong for two reasons:
The browser will not see any event data (presumably it waits for the connection to close first) while that script is still running. I recall that early browser implementations of SSE allowed this, but it is no longer the case.
Even if it did work browser-side, you would still have a PHP script that runs excessively long (until the max_execution_time setting in php.ini kicks in). If this happens once or twice it is OK. If X thousand browsers simultaneously request the same SSE from your server, it will bring the server down.
The right way to do things is to get your PHP script to respond with event stream data and then gracefully terminate as it would normally do. Provide a retry value - in milliseconds - if you want to control when the browser tries again. Here is some sample code
function yourEventData(&$retry)
{
//do your own stuff here and return your event data.
//You might want to return a $retry value (milliseconds)
//so the browser knows when to try again (not the default 3000 ms)
}
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
header('Access-Control-Allow-Origin: *');//optional
$data = yourEventData($retry);
echo "retry: {$retry}\n";
echo "data: {$data}\n\n";
As an answer to the original question this is a bit late but nevertheless in the interests of completeness:
What you get when you poll the server in this way is just data. What you do with it afterwards is entirely up to you. If you want to treat those data as an image and update an image displayed in your web page you would simply do
document.getElementById("imageID").src = "data:image/png;base64," + Your event stream data;
So much for the principles. I have on occasion forgotten that retry has to be in milliseconds and ended up returning, for example, retry:5\n\n, which, much to my surprise, still worked. However, I would hesitate to use SSE to update a browser-side image at 100 ms intervals. A more typical usage would be along the following lines:
User requests a job on the server. That job either gets queued behind other jobs or is likely to take quite a bit of time to execute (e.g. creating a PDF or an Excel spreadsheet and sending it back)
Instead of making the user wait with no feedback - and risking a timeout - one can fire up an SSE which tells the browser the ETA for the job to finish and a retry value is setup so the browser knows when to look again for a result.
The ETA is used to provide the user with some feedback
At the end of the ETA the browser will look again (browsers do this automatically so you need do nothing)
If for some reason the job is not completed by the server, it should indicate that in the event stream it returns, e.g. data: {"code":-1}\n\n, so browser-side code can deal with the situation gracefully.
There are other usage scenarios - updating stock quotes, news headlines etc. Updating images at 100ms intervals feels -a purely personal view - like a misuse of the technology.
It is now close to 5 years since I posted this answer, and it still gets upvoted quite regularly. For the benefit of anyone still using it as a reference: in many ways SSE is, in my view, a rather outdated technology. With the advent of widespread support for WebSockets, why bother with SSE? Quite apart from anything else, the cost of setting up and tearing down an HTTPS connection from the browser for each browser-side retry is very high. The WSS protocol is far more efficient.
A spot of reading if you want to implement websockets
Client Side
Server side via PHP with Ratchet
With Nginx and NChan
To my mind PHP is not a great language to handle websockets and Ratchet is far from easy to setup. The Nginx/NChan route is far easier.
The reason for this behavior (message every 3 seconds) is explained here:
The browser attempts to reconnect to the source roughly 3 seconds after each connection is closed
So one way to get a message every 100 milliseconds is to change the reconnect time (in the PHP):
echo "retry: 100\n\n";
This is not very elegant, though; a better approach is an endless loop in PHP that sleeps for 100 milliseconds on each iteration. There is a good example here; just change the sleep() to usleep() to support milliseconds:
while (1) {
    $x = rand(0, 1000);
    echo "data: {$x}\n\n";
    @ob_flush();
    flush();
    usleep(100000); // 100000 µs = 100 ms (1000000 = 1 second)
}
I believe that the accepted answer may be misleading. Although it answers the question correctly (how to set up 1 second interval) it is not true that infinite loop is a bad approach in general.
SSE is used to get updates from the server when there actually are the updates opposed to Ajax polling that constantly checks for updates (even when there are none) in some time intervals. This can be accomplished with an infinite loop that keeps the server-side script running all the time, constantly checks for updates and echos them only if there are changes.
It is not true that:
The browser will not see any event data while that script is still running.
You can run the script on the server and still send updates to the browser without ending the script execution, like this:
while (true) {
    echo "data: test\n\n";
    ob_flush();
    flush();
    sleep(1);
}
Doing it by sending a retry parameter without an infinite loop will end the script and then start it again, over and over. This is similar to Ajax polling, checking for updates even if there are none, and it is not how SSE is intended to work. Of course there are situations where this approach is appropriate, as listed in the accepted answer (for example, waiting for the server to create a PDF and notifying the client when it's done).
Using the infinite-loop technique will keep the script running on the server the whole time, so be careful with many users: you will have a script instance for each of them, which could lead to server overload. On the other hand, the same issue would arise even in a simple scenario where you suddenly get a bunch of users on the website (without SSE), or if you were using Web Sockets instead of SSE. Everything has its limitations.
Another thing to be careful about is what you put in the loop. For example, I wouldn't recommend putting a database query in a loop that runs every second, because then you're also putting the database at risk of overload. I would suggest using some kind of cache (Redis, or even a simple text file) in this case.
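A minimal sketch of that pattern, assuming the phpredis extension and a Redis server on localhost (the latest_update key is an assumption; your receiving script would write it):
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$last = null;
while (true) {
    $current = $redis->get('latest_update'); // cheap in-memory read, no DB query
    if ($current !== false && $current !== $last) {
        echo "data: {$current}\n\n";
        @ob_flush();
        flush();
        $last = $current;
    }
    sleep(1);
}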
SSE is an interesting technology, but one that can choke implementations on an Apache/PHP backend.
When I first found out about SSE I got so excited that I replaced all my Ajax-polling code with an SSE implementation. Only a few minutes in, I noticed my CPU usage had climbed to 99%, and the fear that my server would soon be brought down forced me to revert to the friendly old Ajax polling. I love PHP, and even though I knew SSE would work better on Node.js, I just wasn't ready to go that route yet!
After a period of critical thinking, I came up with an SSE Apache/PHP implementation that works without literally choking my server to death.
I'm going to share my SSE server-side code with you; hopefully it helps someone overcome the challenges of implementing SSE with PHP.
<?php
/* This script fetches the latest posts in the news feed */
header("Content-Type: text/event-stream");
header("Cache-Control: no-cache");

// prevent direct access
if ( ! defined("ABSPATH") ) die("");

/* push the current user's session data into the global space
   so we can release the session lock */
$GLOBALS["exported_user_id"]  = user_id();
$GLOBALS["exported_user_tid"] = user_tid();

/* now release the session lock, having exported the session data
   into the global space. If we don't do this, no other scripts will
   run, causing the website to lag even when opening it in a new tab */
session_commit();

/* How long should this connection be maintained? We want to wait on
   the server long enough for an update, but holding the connection
   forever burns CPU resources. Depending on the server resources you
   have available you can tweak this higher or lower. The higher it is,
   the closer your implementation stays to true SSE; lower, and it
   becomes equivalent to Ajax polling. A higher value burns more CPU,
   especially when there are more users on your website. */
$time_to_stay = strtotime("1 minute 30 seconds");

/* if the data we require for operation is not passed along, abort
   the connection immediately. Typically SSE reconnects after 3 seconds */
if ( ! isset( $_GET["id"] ) ){
    exit;
}

/* if "HTTP_LAST_EVENT_ID" is set, then this is the continuation of a
   temporarily terminated script. This matters if your SSE maintains
   state: the header carries the last event ID that was sent */
$last_postid = isset( $_SERVER["HTTP_LAST_EVENT_ID"] )
    ? intval( $_SERVER["HTTP_LAST_EVENT_ID"] )
    : intval( $_GET["id"] );

/* keep the connection active until there's data to send to the client */
while (true) {

    /* assume this function performs some database operations
       to get the latest posts */
    $data = fetch_newsfeed( $last_postid );

    /* if the data is not empty, there are new posts to push to the client */
    if ( ! empty( trim( $data ) ) ){

        /* With SSE it is my common practice to JSON-encode all data,
           because I have noticed that not doing so sometimes causes SSE
           to lose part of the data packet and deliver only a fraction
           of it. That is bad here, since we return structured HTML data,
           and losing part of it would break the page when the data is
           inserted. */
        $data = json_encode(array("result" => $data));

        echo "id: $last_postid \n"; // this is the lastEventID
        echo "data: $data\n\n";     // our data

        /* flush so we don't wait for the script to terminate -
           make sure the calls are in this order */
        @ob_flush(); flush();
    }

    // how much of our stay-time is left on this connection
    $time_left = intval(floor($time_to_stay) - time());

    /* if we have stayed longer than the time to stay, abort this
       connection to free up CPU resources */
    if ( $time_left <= 0 ) { exit; }

    /* wait 5 seconds and start again from the top. We are in a tight
       loop and don't want to keep pounding the DB, so sleep a few
       seconds between checks */
    sleep(5);
}
SSE on Nginx driven PHP websites seems to have some finer nuances. Firstly, I had to give this setting in the Location section of the Nginx configuration
fastcgi_buffering off;
Someone recommended that I change the fastcgi_read_timeout to a longer period but it did not really help... or maybe I did not dive deep enough
fastcgi_read_timeout 600s;
Both those settings are to be given in the Nginx configuration's location section.
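For reference, a sketch of the relevant location block (the socket path and PHP version are assumptions; adjust them to your setup):
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    fastcgi_buffering off;
    fastcgi_read_timeout 600s;
}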
The standard endless loop that many recommend inside the SSE code tends to hang Nginx (or possibly PHP 7.4 FPM), and that is serious, as it brings down the entire server. Though people have suggested set_time_limit(0) in PHP to change the default timeout (which I believe is 30 seconds), I am not sure it is a good strategy.
If you remove the endless loop entirely, the SSE system behaves like polling: the JavaScript EventSource code keeps calling the SSE PHP module again. That makes it a bit simpler than Ajax polling (we don't have to write any extra JavaScript to do the polling), but it still keeps retrying and hence is very similar to Ajax polling. And each retry is a complete reload of the PHP SSE code, so it is slower than what I finally did.
This is what worked for me. It is a hybrid solution, where there is a loop alright, but not an endless one. Once that loop is finished, the SSE PHP code terminates. That gets registered in the browser as a failure (You can see that in the inspector console) and the browser then calls the SSE code once again on the server. It is like polling, but at longer intervals.
In between one load of the SSE and the next reload, the SSE keeps working in the loop, during which additional data can be pushed into the browser. So you do have enough speed, without the headache of the entire server hanging.
<?php
$success = set_time_limit( 0 );
ini_set('auto_detect_line_endings', 1);
ini_set('max_execution_time', '0');
ob_end_clean();
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
header('X-Accel-Buffering: no');
//how fast do you want the browser to reload this SSE
//after the while loop fails:
echo "retry: 200\n\n";
//If any dynamic data comes into your application
//in this 'retry' time period, and disappears,
//then SSE will NOT be able to push that data
//If it is too short, there may be insufficient
//time to finish some work within the execution
//of one loop of the SSE while loop below
$emptyCount = 0;
$execCount = 0;
$countLimit = 60; //Experiment with this, which works for you
$emptyLimit = 5;
$prev = "";
while($execCount < $countLimit){
$execCount++;
if( connection_status() != CONNECTION_NORMAL or connection_aborted() ) break;
if(file_exists($file_path)) {
//The file is to be deleted
//so that it does not return back again
//There can be better method than one suggested here
//But not getting into it, as this is only about SSE overall
$s= file_get_contents("https://.....?f=$file_path");
if($s == "")
{
$emptyCount++;
$prev = "";
}
else {
if($s != $prev){
$prev = $s;
echo $s; //This is formatted as data:...\n\n
//as needed by SSE
}
}
//If it is continually empty then break out of the loop. Why hang around?
if($emptyCount >$emptyLimit) {
$emptyCount=0;
$prev = "";
break;
}
} else $prev = "";
@ob_flush();
flush();
sleep(1);
}
I have a web application that I am trying to make more efficient by reducing the number of database queries that it runs. I am inclined to implement some type of Comet style solution but my lack of experience in this department makes me wonder if a more simple solution exists.
For the sake of brevity, let's just say that I have a database that contains a list of systems on a network and their current status (whether they are up or down). A user can sign into the web app and select which systems she is interested in monitoring. After which she can visit the monitoring page which displays the number of systems that are currently down.
As of now the count is refreshed using Ajax: every minute the client sends a request to the server, which in turn runs a query against the database to get the current count and returns the result to the client. I know this is inefficient; for every client that logs in, another query is run against the database every minute. O(n) = bad!
I know that I can use some type of caching, such as memcached, but it still means there is a request for every user every minute. Better, but I still feel as if it's not the best solution.
I envision something more like this:
Every minute the server runs a query to pull a count for all the systems that are currently down.
The server then pushes this data to the interested clients.
That way it doesn't matter how many users are logged in and watching the monitoring page, the server only ever runs one query per minute. O(1) = good! The problem is that even after all of the research I've done I can't quite figure out how to implement this. To be honest I don't completely understand what it is that I am looking for, so that makes it very difficult to research a solution. So I am hoping that more enlightened developers can lead me in the right direction.
This issue can easily be solved with an app called Pusher, which is a hosted publish/subscribe API. In a nutshell, Pusher provides two libraries: one for the client (the subscriber) and one for the server (the publisher).
The publisher can be a single script on your server (there are quite a few languages available) set to run at whatever interval you desire. Each time it runs it will connect to a channel and publish to it whatever data it generates. The client is created via a bit of JavaScript in your web app, and whenever a user navigates to your page, the client subscribes to the same channel your server script is publishing to and receives the data as soon as it becomes available and then can manipulate it however you see fit.
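For example, a crontab entry to run the publisher once a minute could look like this (the script path is an assumption):
* * * * * /usr/bin/php /path/to/publisher.php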
The server:
#!/usr/bin/php
<?php
require('Pusher.php');
$dbh = new PDO("mysql:host=$db_host;dbname=$db_name", $db_user, $db_pass);
foreach($dbh->query('SELECT hostname FROM systems WHERE status = 0') as $row) {
$systems[] = $row['hostname'];
}
$pusher = new Pusher($pusher_key, $pusher_secret, $pusher_app_id);
$pusher->trigger(
'my-channel',
'my-event',
array('message' => implode('<br />', $systems))
);
The client:
<!DOCTYPE html>
<html>
<head>
<title>Pusher Test</title>
<script src="http://code.jquery.com/jquery.min.js" type="text/javascript"></script>
<script src="http://js.pusher.com/1.12/pusher.min.js" type="text/javascript"></script>
<script type="text/javascript">
var pusher = new Pusher(key);
var channel = pusher.subscribe('my-channel');
channel.bind('my-event', function(data) {
$('#systems').html(data.message);
});
</script>
</head>
<body>
<div id="systems"></div>
</body>
</html>
So, in this case regardless of how many clients access the page there is only one database query running, and at each interval all the subscribed clients will be updated with the new data.
There is also an open source server implementation of the Pusher protocol written in Ruby called Slanger.
You can do it with Comet. But you can also just have a JavaScript timer that polls the server every minute or so; that depends on how quickly you want the feedback. It's not necessary to keep the TCP connection open the whole time.
Also, the way you find out about the status of the servers is independent of the way the clients get this information. You would not want to query the servers' status every time a client requests it. Instead, have a timer on the app server that polls the server status and stores it; client requests are then fed from this stored status instead of the actual live status.
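A minimal sketch of that split, with a cron-driven updater and a cheap endpoint (table, credentials and file names are assumptions):
<?php
// update_status.php - run by cron once a minute; the only DB query per interval
$dbh = new PDO("mysql:host=localhost;dbname=monitor", "user", "pass");
$down = $dbh->query("SELECT COUNT(*) FROM systems WHERE status = 0")->fetchColumn();
file_put_contents("/tmp/down_count.txt", $down);

<?php
// status.php - what every client polls; it only reads the cached value
header("Content-Type: application/json");
echo json_encode(["down" => (int) file_get_contents("/tmp/down_count.txt")]);
This keeps the per-minute database cost constant no matter how many clients are watching.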