I am trying to use periodic refresh (AJAX polling) on my site via XMLHttpRequest (XHR) to check every 10 seconds whether a user has a new message in the database, and if there is, inform him/her by creating a div dynamically like this:
function shownotice() {
    var divnotice = document.createElement("div");
    var closelink = document.createElement("a");
    closelink.onclick = this.close;
    closelink.href = "#";
    closelink.className = "close";
    closelink.appendChild(document.createTextNode("close"));
    divnotice.appendChild(closelink);
    divnotice.className = "notifier";
    divnotice.setAttribute("align", "center");
    document.body.appendChild(divnotice);
    divnotice.style.top = document.body.scrollTop + "px";
    divnotice.style.left = document.body.scrollLeft + "px";
    divnotice.style.display = "block";
    request(divnotice);
}
Is this a reliable or stable way to check for messages? When I look under Firebug, a lot of requests are going to my database. Could this method bring my database down because of too many requests? Is there another way to do this? When I log in to Facebook and check under Firebug, no requests seem to be happening, but I know they use periodic refresh too... how do they do that?
You can check for new data every 10 seconds, but instead of checking the db, you need to do a lower impact check.
What I would do is modify the db update process so that when it makes a change to some data, it also updates the timestamp on a file to show that there is a recent change.
If you want better granularity than "something changed somewhere in the db", you can break it down by username (or some other identifier). The file(s) to be updated would then be named after the username, one file for each user who might be interested in the update.
So, when your script asks the server if there is any information for user X newer than time t, instead of making a DB query, the server-side script can just compare the timestamp of a file with the time parameter and see if there is anything new in the database.
In the process that is updating the DB, add code that (roughly) does:
foreach username interested in this update
{
touch the file \updates\username
}
Then your function to see if there is new data looks something like:
function NewDataForUser (string username, time t)
{
timestamp ts = GetLastUpdateTime("\updates\username");
return (ts > t);
}
Once you find that there is new data, you can then do a full blown DB query and get whatever information you need.
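If your polling endpoint happens to run on Node.js, a minimal sketch of both halves might look like this (the updates/ directory layout and the function names are illustrative assumptions, not part of the original answer):

// Sketch only: one empty file per username under updates/, touched by the
// DB-update process whenever that user has new data.
const fs = require('fs');
const path = require('path');

// Called from the code that updates the DB, once per interested username.
function touchUserFile(username) {
    const file = path.join('updates', username);
    const now = new Date();
    try {
        fs.utimesSync(file, now, now);          // bump the mtime
    } catch (err) {
        fs.closeSync(fs.openSync(file, 'w'));   // create it on the first update
    }
}

// Called by the polling endpoint: is there anything newer than time t (in ms)?
function newDataForUser(username, t) {
    try {
        return fs.statSync(path.join('updates', username)).mtimeMs > t;
    } catch (err) {
        return false;                           // no file means no updates yet
    }
}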
I left Facebook open with Firebug running and I'm seeing requests about once a minute, which seems like plenty to me.
The other approach, used by Comet, is to make a request and leave it open, with the server dribbling out data to the client without completing the response. This is a hack, and violates every principle of what HTTP is all about :). But it does work.
This is quite unreliable and probably far too taxing on the server in most cases.
Perhaps you should have a look into a push interface: http://en.wikipedia.org/wiki/Push_technology
I've heard Comet is the most scalable solution.
I suspect Facebook uses a Flash movie (they always download one called SoundPlayerHater.swf) which they use to do some comms with their servers. This does not get caught by Firebug (might be by Fiddler though).
This is not a better approach, because you end up querying your server every 10 seconds even when there are no real updates.
Instead of this polling approach, you can simulate server push (reverse AJAX or Comet). This greatly reduces the server workload, and the client is only updated when there actually is an update on the server side.
As per Wikipedia:
Reverse Ajax refers to an Ajax design pattern that uses long-lived HTTP connections to enable low-latency communication between a web server and a browser. Basically it is a way of sending data from client to server and a mechanism for pushing server data back to the browser.
For more info, check out my other response to a similar question.
Related
I have a piece of open source software written in Python which uses the Bottle web server to display forms in a web browser. The form data are sent via "method = post" to the web server. Until now, the server process has been running on the same host (a PC) as the browser, so there is no issue with the internet connection.
Now I have to rewrite this software so that it can be used on mobile devices, with the server somewhere on the internet. The environment in which data entry takes place will be such that an unstable or lost internet connection is likely. So I have to make provisions for the case where the website containing the form is loaded first (in the office via WLAN, say), then data entry takes place (in the "field"), and during data entry the internet connection is lost, so that saving the data to the server won't work. In this case it would be great to be able to save the form data locally, in order to send the POST request later on. (Probably it won't be possible to keep the website open all the time until this succeeds; at the latest when the battery runs low, I'd run into problems.)
Probably I'm not the first with this problem, so my question is: is there a "standard" (or well tested) solution for buffering form data on the client side when a POST request is not answered, and sending the same request later on? If not, how would you go about solving this issue? In particular, I see the following (sub-)problems:
How to detect (on the client side) that a post request failed? Probably some kind of timeout mechanism in javascript would have to be employed, but how?
How to save data? My first idea would be to save data to a cookie using javascript. Do I overlook something here?
How to send data back later on?
I'm sufficiently proficient in python to dare this project, but rather new to web technologies, so please excuse if some part of the question is rather stupid. In this case, I'd be grateful to be told so... (... with a hint on how to ask a better question.)
Thanks a lot for any help.
I will try to answer based on (sub-)problems:
How to detect (on the client side) that a post request failed? Probably some kind of timeout mechanism in javascript would have to be employed, but how?
To detect if the request failed:
Only send status code 200 if you received the data and it is saved to the backend!
Don't send 200 if there is an error! (use an error status code like 5xx or 4xx)
There is a timeout option in jQuery to cancel the request if it takes more than the given time to complete
When it fails, save the data to localStorage
If you are not using jQuery, I guess you can do something similar using fetch in vanilla JavaScript (a sketch follows the jQuery example below)
$.ajax({
    timeout: 3000 // sets timeout to 3 seconds
}).done(function () {
    console.log("success");
}).fail(function () {
    console.log("error");
    // localStorage only stores strings, so parse it (and fall back to an empty list)
    var _local = JSON.parse(localStorage.getItem('data-saved')) || [];
    _local.push({"key": "value"});                               // append the JSON-based form data
    localStorage.setItem('data-saved', JSON.stringify(_local));  // update localStorage
});
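If you are not using jQuery, a rough vanilla-JavaScript equivalent with fetch might look like this (the /submit URL and the payload object are placeholders, not something from the original question):

// Sketch: POST with a 3-second timeout; on failure, queue the data in localStorage.
function postWithFallback(payload) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), 3000);

    return fetch('/submit', {                       // placeholder endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload),
        signal: controller.signal
    })
        .then((res) => {
            if (!res.ok) throw new Error('server returned ' + res.status);
            console.log('success');
        })
        .catch(() => {
            // failed or timed out: append to the queue in localStorage
            const queue = JSON.parse(localStorage.getItem('data-saved')) || [];
            queue.push(payload);
            localStorage.setItem('data-saved', JSON.stringify(queue));
        })
        .finally(() => clearTimeout(timer));
}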
How to save data? My first idea would be to save data to a cookie using javascript. Do I overlook something here?
Save data using localStorage
localStorage can only store strings, not JSON objects directly; however, you can save with JSON.stringify and load back with JSON.parse.
// Get data (fall back to an empty array if nothing is stored yet)
var get_local_data = JSON.parse(localStorage.getItem('data-saved')) || [];
// Update data (arrays use push, not append)
get_local_data.push({"Name": "value", "age": 10});
// Update localStorage
localStorage.setItem('data-saved', JSON.stringify(get_local_data));
How to send data back later on?
Send the data back later using setInterval in JavaScript.
Check periodically whether there is any data under the localStorage key; if there is, send an Ajax request to the back-end!
// Run every 5 seconds
setInterval(function () {
    // Check if we have any failed (queued) data
    var get_local_data = JSON.parse(localStorage.getItem('data-saved')) || [];
    if (get_local_data.length > 0) {
        // Make an ajax request with the queued data
        // On success, remove the sent entries from localStorage
        // On failure, leave them queued and try again on the next tick
    }
}, 5000);
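A slightly fleshed-out sketch of that resend step, again assuming the hypothetical /submit endpoint from above:

// Try to flush the queued form data; clear the queue only if the server accepts it.
function flushQueue() {
    const queue = JSON.parse(localStorage.getItem('data-saved')) || [];
    if (queue.length === 0) return;

    fetch('/submit', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(queue)
    }).then((res) => {
        if (res.ok) {
            localStorage.setItem('data-saved', JSON.stringify([]));  // sent, so clear the queue
        }
        // on failure, keep the queue and retry on the next interval
    });
}

setInterval(flushQueue, 5000);  // retry every 5 seconds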
A web client should only expose some features when a backend API is up and running. Therefore, I'm looking for a clean way to monitor the availability of this backend.
As a quick fix, I made a timer-based function that performs a basic GET on the API root. It's not very clean, generates lots of traffic, and pollutes the JavaScript console with errors (when the server is down).
How should one deal with such situation?
You can trigger something along the lines of this when you need it:
function checkServerStatus()
{
    setServerStatus("unknown");
    var img = document.body.appendChild(document.createElement("img"));
    img.onload = function()
    {
        setServerStatus("online");
    };
    img.onerror = function()
    {
        setServerStatus("offline");
    };
    img.src = "http://myserver.com/ping.gif";
}
Make ping.gif small (1 pixel) to make it as fast as possible.
Of course, you can do it more smoothly by calling an API endpoint that returns true and has a really small response time, but that requires some coding on the back-end; this approach only needs you to place a 1-pixel GIF image in the correct directory on the server. You can use any picture already present on the server, but expect more traffic and longer timings as the image grows larger.
Now put this in some function that calls it on a delay, or simply call it when you need to check the status; it's up to you.
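For example, a bare-bones way to poll it (the 30-second interval is arbitrary):

// Check once immediately, then re-check every 30 seconds.
checkServerStatus();
setInterval(checkServerStatus, 30000);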
If you need a server to send to your app a notification when it's down then you need to implement this:
https://en.wikipedia.org/wiki/Push_technology
Ideally, you would have a highly reliable server with a fast response rate pinging the desired server at some interval to determine whether it is up, and then use push to get that information to your app. That way the third server only sends you a push when the status of your app server changes. Ideally, this server's requests have a high priority in your app server's queue, and the servers are well connected and close to each other, but not on the same network in case that network fails.
Recommendation:
The first approach should serve you well, since it's simple to implement and requires the least amount of knowledge.
Consider the second if:
You need a really small checking interval, which makes your application slower and network traffic higher
You have multiple applications that need the same check, making the load heavier on each application, the network AND the server. The second approach lets you use a single ping to determine the truth for all apps.
To limit the number of requests, a simple solution is to use server-sent events (SSE). This protocol, used on top of HTTP, allows the server to push multiple updates in response to a single client request.
Client-side code (JavaScript):
var evtSource = new EventSource("backend.php");
evtSource.onmessage = function(e) {
    console.log('status:' + e.data);
};
evtSource.onerror = function(e) {
    // add some retry logic, then display an error to the user
};
Back-end code (PHP; other languages support this too)
header("Content-Type: text/event-stream");
while (1) {
    // Every 30 s, send an OK status in SSE format ("data: ..." terminated by a blank line)
    echo "data: OK\n\n";
    ob_flush();
    flush();
    sleep(30);
}
In both cases this limits the number of requests (only one per "session"), but you will have one open socket per client, which can also be too heavy for your server.
If you really want to lower the workload, you should delegate it to an external monitoring platform that can expose an API publishing the backend status.
Maybe one already exists if your backend is hosted on a cloud platform.
I am building a simple support chat for my website using Ajax. I would like to check whether the user I am currently chatting with has left the browser.
At the moment I have built that feature by setting an interval function on the customer side that creates a file named userId.txt
In the admin area I have created an interval function that checks whether userId.txt exists and, if so, deletes it. If the file is not re-created by the customer's interval function, then the next time the admin function finds the file missing it marks the customer with this userId as inactive.
Abstract representation:
customer -> interval Ajax function -> php [if no file - create a new file]
admin -> interval Ajax function -> php [if file exists - delete the file] -> return state to Ajax function and do something
I was wondering if there is any better way to implement this feature that you can think of?
My solution is to use the jquery ready and beforeunload methods to trigger an ajax post request that will notify when the user arrives and leaves.
This solution is "light" because it only logs twice per user.
support.html
<!DOCTYPE html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
<script>
//log user that just arrived - Page loaded
$(document).ready(function() {
$.ajax({
type: 'POST',
url: 'log.php',
async:false,
data: {userlog:"userid arrived"}
});
});
//log user that is about to leave - window/tab will be closed.
$(window).bind('beforeunload', function(){
$.ajax({
type: 'POST',
url: 'log.php',
async:false,
data: {userlog:"userid left"}
});
});
</script>
</head>
<body>
<h2>Your support html code...</h2>
</body>
</html>
log.php
<?php
//code this script in a way that you get notified in real time
//in this case, I just log to a txt file
$userLog = $_POST['userlog'];
file_put_contents("userlog.txt", $userLog."\n", FILE_APPEND );
//userid arrived
//userid left
Notes:
1 - Tested on Chrome, FF and Opera. I don't have a mac so I couldn't test it on Safari but it should work too.
2 - I've tried the unload method but it wasn't as reliable as beforeunload.
3 - Setting async to false on the ajax request means that the statement you are calling has to complete before the next statement, this ensures that you'll get notified before the window/tab is closed.
#Gonzalon makes a good point, but using a normal DB table or the filesystem for constantly recording user activity would be hard on most hard disks. That is a good reason to use shared-memory functions in PHP instead.
You have to differentiate a bit between the original question "How do I check in real time if a user is logged in?" and "How can I make sure the user is still on the other side (in my chat)?".
For a "login system" I would suggest working with PHP sessions.
For the "is the user still there?" question, I would suggest updating a field of the active session named LAST_ACTIVITY. You write a timestamp of the last contact with the client into a store (database) and test whether it is older than X seconds.
I'm suggesting sessions because you haven't mentioned them in your question, and it looks like you are creating the userID.txt file manually on each Ajax request, right? That's not needed, unless working cookie- and session-less is a development requirement.
Now, for the PHP sessions I would simply change the session handler (backend) to whatever scales for you and makes requesting information easy.
By default PHP uses the session temp folder to create session files, but you might change it so that the underlying session handler becomes a MariaDB database, Memcache or Redis (e.g. via Rediska).
When the user sessions are stored in a database you can query them: "How many users are logged in right now?", "Who is where?".
The answer to "How can I check in real time if a user is logged in?" is then: the user counts as logged in once their session has been created and they have successfully authenticated.
For a real-time chat application there are a lot of technologies out there, from "PHP Comet" and "HTML5 EventSource" + "WebSockets" / "long polling" to "message queues" like RabbitMQ/ActiveMQ with publish/subscribe to specific channels.
If this is a simple or restricted environment, maybe a VPS, then you can still stick to your solution of periodic Ajax requests. Each request might then update $_SESSION['LAST_ACTIVITY'] with a server-side timestamp. Referencing: https://stackoverflow.com/a/1270960/1163786
A modification of this idea would be to stop making Ajax requests when the mouse movement stops. If the user doesn't move the mouse on your page for, say, 10 minutes, you stop updating the LAST_ACTIVITY timestamp. This fixes the problem of showing idle users as online.
Another modification is to reduce the size of the "I am still here" request to the server by using small GET or HEAD requests. A short HEAD "ping" is often enough, instead of sending long messages or JSON via POST.
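A small sketch of those two modifications combined, assuming a hypothetical /ping endpoint whose server side bumps $_SESSION['LAST_ACTIVITY']:

// Ping the server with a tiny HEAD request, but only while the user
// has been active (mouse or keyboard) within the last 10 minutes.
var lastActivity = Date.now();

['mousemove', 'keydown'].forEach(function (evt) {
    document.addEventListener(evt, function () {
        lastActivity = Date.now();
    });
});

setInterval(function () {
    if (Date.now() - lastActivity < 10 * 60 * 1000) {
        fetch('/ping', { method: 'HEAD' });   // server side updates LAST_ACTIVITY
    }
}, 15000);                                    // every 15 seconds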
You might find a complete "How to create an Ajax Web Chat with PHP, jQuery" over here. They use a timeout of 15 seconds for the chat.
Part 1 http://tutorialzine.com/2010/10/ajax-web-chat-php-mysql/
Part 2 http://tutorialzine.com/2010/10/ajax-web-chat-css-jquery/
You can do it this way, but it'll be slow, inefficient, and probably highly insecure. Using a database would be a noticeable improvement, but even that wouldn't be particularly scalable, depending on how "real-time" you want this to be and how many conversations you want it to be able to handle simultaneously.
You'd be much better off using a NoSQL solution such as Redis for any actions that you'll need to run frequently (ie: "is user online" checks, storing short-term conversation updates, and checking for conversation updates at short intervals).
Then you'd use the database for more long-term tasks like storing user information and saving active conversations at regular intervals (maybe once per minute, for example).
Why Ajax and not WebSockets? Surely a websocket would give you a considerably faster chat system; it wouldn't require generating and checking a text file, would not involve a database lookup, and you can tell instantly if the connection is dropped.
I would install the https://github.com/nrk/predis library, so that at the time the user authenticates, it publishes a message to the Redis server.
Then you can set-up a little node server on the back-end - something simple like:
var server = require('http').Server();
var io = require('socket.io')(server);
var Redis = require('ioredis');
var redis = new Redis();
var authenticatedUsers = [];
// Subscribe to the authenticatedUsers channel in Redis
redis.subscribe('authenticatedUsers');
// Logic for what to do when a message is received from Redis
redis.on('message', function(channel, message) {
authenticatedUsers.push(message);
io.emit('userAuthenticated', message);
});
// What happens when a client connects
io.on('connection', function(socket) {
console.log('connection', socket.id);
socket.on('disconnect', function(a) {
console.log('user disconnected', a);
});
});
server.listen(3000);
Far from complete, but something to get you started.
Alternatively, take a look at Firebase. https://www.firebase.com/ if you dont want to bother with the server-side
I would suggest using the built-in HTML5 session storage for this purpose. It is supported by all modern browsers, so we will not face issues there.
This will help us be efficient and quick in recognizing whether the user is online. Whenever the user moves the mouse or presses a key, update session storage with the date and time; check it periodically, and if it is empty, null or stale, decide that the user has left the site.
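A quick sketch of that idea (the key name and the 60-second threshold are arbitrary choices):

// Record the last activity time in session storage on user input.
['mousemove', 'keydown'].forEach(function (evt) {
    document.addEventListener(evt, function () {
        sessionStorage.setItem('last-activity', String(Date.now()));
    });
});

// Periodically decide whether the user still seems to be around.
setInterval(function () {
    var last = Number(sessionStorage.getItem('last-activity')) || 0;
    var online = Date.now() - last < 60 * 1000;
    console.log(online ? 'user appears active' : 'user appears to have left');
}, 5000);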
Depending on your resources you may opt for WebSockets or the older technique called long polling. Both ensure bidirectional communication between the server and the client, but they may be expensive in terms of resources.
Here is a good tutorial on WebSockets in PHP:
http://www.binarytides.com/websockets-php-tutorial/
I would use a callback that you (the admin) can trigger. I use this technique in web and mobile apps to do things like the following (all of this is set on the user side from the server):
Send a message to a user (like: "behave or I ban you").
Update user status/location (for events, to know when attendees are arriving).
Terminate user connections (e.g. force log-out during maintenance).
Set the user report time (e.g. how often the user should report back).
The callback for the web app is usually in JavaScript, and you define when and how you want the user to call home. Think of it as a service channel.
Instead of creating and deleting files you can do the same thing with cookies. The benefits of using cookies are:
You do not need to make an Ajax request to create a file on the server, as cookies are accessible from JavaScript/jQuery.
Cookies can be given an expiry time, so they delete themselves automatically after a while and you will not need a PHP script to delete them.
Cookies are accessible from PHP, so whenever you need to check whether the user is still active, you can simply check if the cookie exists.
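A minimal sketch of that heartbeat, with a cookie name (chat_alive) and a 30-second expiry made up for illustration; the PHP side then only has to check isset($_COOKIE['chat_alive']):

// Refresh a short-lived "I'm still here" cookie every 10 seconds.
// While the tab is open the cookie keeps getting renewed; once the
// user leaves, it simply expires on its own after 30 seconds.
setInterval(function () {
    document.cookie = "chat_alive=1; max-age=30; path=/";
}, 10000);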
If it were ASP.NET I would say SignalR... but for PHP perhaps you could look into Ratchet. It might help with a lot of what you are trying to accomplish, as messages can be pushed to the clients instead of the clients polling.
IMO, there is no need to set up solutions with bidirectional communication. You only want to know whether a user is still logged in or attached to the system. If I understand you correctly, you only need communication from server to client, so you can try SSE (server-sent events) for that. The link gives you an idea of how to implement this with PHP.
The idea is simple. The server knows if user is attached or not. He could send something like "hey, user xyz is still logged in" or "hey, user xzy seems not to be logged in any more" and the client only listens to that messages and can react to the messages (e.g. via JavaScript).
The advantage is: SSE is really good for realtime applications, because the server only has to send data and the client has only to listen, see also the specification for this.
If you really need bidirectional communications or can't go with the two dependencies mentioned in the specs, it's not the best decision to use SSE, of course.
Here is a late Update with a nice chat example (written in Java). Probably it's also good to get an idea how to implement this in PHP.
I have a small application where a users can drag and drop a task in an HTML table.
When user drops the task, I call a javascript function called update_task:
function update_task(user_id, task_id, status_id, text, uiDraggable, el) {
    $.get('task_update.php?user_id=' + user_id + '&task_id=' + task_id + '&status_id=' + status_id, function(data) {
        try {
            jsonResult = JSON.parse(data);
        } catch (e) {
            alert(data);
            return;
        };
In task_update.php I GET my values; user_id, task_id & status_id and execute a PDO UPDATE query, to update my DB. If the query executes correctly, I
echo json_encode ( array (
'success' => true
) );
And then I append the task to the correct table cell
if (typeof jsonResult.success != 'undefined') {
    $(uiDraggable).detach().css({top: 0, left: 0}).appendTo(el);
}
This has all worked fine. But I'm starting to realize that it's a problem when two or more people are making changes at the same time. If I'm testing with two browsers and have the site open in both, for example, then if I move a task in browser 1, I have to manually refresh the page in browser 2 to see the change.
So my question is: how can I make my application auto-detect when a change to the DB table has been made? And how can I update the HTML table without refreshing the page?
I have looked at timed intervals for updating pages, but that wouldn't work for me, since I really don't want to force the browser to refresh. A user can, for example, also create a new task in a lightbox iframe, so it would be incredibly annoying for them if their browser refreshed while they were trying to create a new task.
So yeah, what would be the best practice for me to use?
Use Redis and its publish/subscribe feature to implement a message bus between your PHP app and a lightweight websocket server (Node.js is a good choice for this).
When your PHP modifies the data, it also emits an event in Redis that some data has changed.
When a websocket client connects to the Node.js server, it tells the server what data it would like to monitor, then, as soon as a Redis event was received and the event's data matches the client's monitored data, notify the client over the websocket, which then would refresh the page.
Take a look at this question with answers explaining all of this in detail, includes sample code that you can reuse.
I would use Ajax to check the server at a reasonable interval. What's reasonable depends on your project - it should be often enough that changes on one end don't mess up what another user is doing.
If you're worried about this being resource intensive you could use APC to save last modified times for everything that's active - that way you don't have to hit the database when you're just checking if anything has changed.
When things have changed then you should use ajax for that as well, and add the changes directly in the page with javascript/jquery.
If you really need to detect DB changes made outside your code, write database triggers.
But if nobody except your code changes the data, you can implement some observation in your own code:
Implement the Observer (EventListener) pattern, or use an existing implementation (a sketch follows below).
Trigger events whenever anything meaningful happens.
Subscribe to those events.
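A bare-bones sketch of that pattern in JavaScript (the TaskEvents object and the 'task.updated' event name are made up for illustration):

// Minimal event emitter: subscribers register callbacks per event name,
// and the code that changes data emits an event after each change.
var TaskEvents = {
    listeners: {},
    on: function (event, callback) {
        (this.listeners[event] = this.listeners[event] || []).push(callback);
    },
    emit: function (event, payload) {
        (this.listeners[event] || []).forEach(function (cb) { cb(payload); });
    }
};

// Subscribe: refresh the HTML table row when a task changes.
TaskEvents.on('task.updated', function (task) {
    console.log('refresh row for task', task.task_id);
});

// Trigger: the code that successfully saved a change emits the event.
TaskEvents.emit('task.updated', { task_id: 42, status_id: 2 });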
I want to gather some information using the visitors of my websites.
What I need is for each visitor to ping 3 different hostnames and then save the following info into a DB.
Visitor IP, latency 1,latency 2, latency 3
Of course everything has to be transparent for the visitor without interrupting him in any way.
Is this possible? Can you give me an example? Are there any plugins for jQuery or something similar to make it easier?
EDIT
This is what I have so far: jsfiddle.net/dLVG6, but the data is too random; it jumps from 50 to 190.
This is going to be more of a pain than you might think.
Your first problem is that Javascript doesn't have ping. Mostly what Javascript is good at is HTTP and a few cousin protocols.
Second problem is that you can't just issue some ajax requests and time the results (that would be way too obvious). The same origin policy will prevent you from using ajax to talk to servers other than the one the page came from. You'll need to use JSONP, or change the src of an image tag, or something else more indirect.
Your third problem is that you don't want to do anything that will result in a lot of data being returned. You don't want data transfer time or extensive server processing to interfere with measuring latency.
Fourth, you can't ask for URLs that might be cached. If the object happened to be in the cache, you would get really low "latency" measurements but it wouldn't be meaningful.
My solution was to use an image tag with no src attribute. On document load, set the src to point to a valid server but use an invalid port. Generally, it is faster for a server to simply reject your connection than to generate a proper 404 error response. All you have to do then is measure how long it takes to get the error event from the image.
From the Fiddle:
var start = new Date().getTime();
$('#junkOne').attr('src', 'http://fate.holmes-cj.com:8886/').error(function () {
    var end = new Date().getTime();
    $('#timer').html("" + (end - start) + "ms");
});
The technique could probably be improved. Here are some ideas:
Use the IP address instead of the DNS host name.
Do the "ping" multiple times, throw out the highest and lowest scores, then average the rest (a sketch follows below).
If your web page has a lot of heavy processing going on, try to run the tests when you think the UI load is lightest.
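A rough sketch of the second idea, reusing the image-error trick (the sample count and the cache-busting query string are my own additions):

// Take several samples, drop the fastest and slowest, and average the rest.
function samplePing(url, done) {
    var start = Date.now();
    var img = new Image();
    img.onload = img.onerror = function () {
        done(Date.now() - start);
    };
    img.src = url + '?_=' + Math.random();   // cache-buster so no sample is served from cache
}

function averagePing(url, samples, done) {
    var results = [];
    (function next() {
        samplePing(url, function (ms) {
            results.push(ms);
            if (results.length < samples) return next();
            results.sort(function (a, b) { return a - b; });
            var trimmed = results.slice(1, -1);   // throw out highest and lowest
            var avg = trimmed.reduce(function (s, v) { return s + v; }, 0) / trimmed.length;
            done(avg);
        });
    })();
}

// usage: averagePing('http://fate.holmes-cj.com:8886/', 7, function (ms) { console.log(ms + ' ms'); });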
With jQuery you could use $.ajax(url, settings) (http://api.jquery.com/jQuery.ajax/) and take the time in beforeSend and on complete via Date.now(); subtract those times and you have the duration of the request (not exactly a "ping", though).
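Roughly like this, for example (the URL is a placeholder):

// Time the request from just before it is sent until it completes.
var sendTime;
$.ajax('http://example.com/ping', {
    beforeSend: function () {
        sendTime = Date.now();
    },
    complete: function () {
        console.log('request took ' + (Date.now() - sendTime) + ' ms');
    }
});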
2021:
Tried this again for a React app I'm building. I don't think the accuracy is too great.
const ping = () => {
var start = new Date().getTime();
api.get('/ping').then((res) => {
console.log(res)
var end = new Date().getTime();
console.log(`${end-start} ms`)
}, (err) => {
console.log(err)
})
};
Wrote my own little API, but I suppose there's just way too much going on during the request.
In terminal, I get about 23ms ping to my server.. using this it shoots up to like 200-500ms.