I want to gather some information using the visitors of my websites.
What I need is for each visitor to ping 3 different hostnames and then save the following info into a DB:
Visitor IP, latency 1, latency 2, latency 3
Of course, everything has to be transparent to the visitor, without interrupting them in any way.
Is this possible? Can you give me an example? Are there any plugins for jQuery or something to make it easier?
EDIT
This is what I have so far: jsfiddle.net/dLVG6, but the data is too random; it jumps from 50 ms to 190 ms.
This is going to be more of a pain than you might think.
Your first problem is that Javascript doesn't have ping. Mostly what Javascript is good at is HTTP and a few cousin protocols.
Second problem is that you can't just issue some ajax requests and time the results (that would be way too easy). The same origin policy will prevent you from using ajax to talk to servers other than the one the page came from. You'll need to use JSONP, or change the src of an image tag, or something else more indirect.
Your third problem is that you don't want to do anything that will result in a lot of data being returned. You don't want data transfer time or extensive server processing to interfere with measuring latency.
Fourth, you can't ask for URLs that might be cached. If the object happened to be in the cache, you would get really low "latency" measurements but it wouldn't be meaningful.
My solution was to use an image tag with no src attribute. On document load, set the src to point to a valid server but use an invalid port. Generally, it is faster for a server to simply reject your connection than to generate a proper 404 error response. All you have to do then is measure how long it takes to get the error event from the image.
From the Fiddle:
var start = new Date().getTime();
// the error handler fires as soon as the connection attempt is rejected
$('#junkOne').attr('src', 'http://fate.holmes-cj.com:8886/').error(function () {
    var end = new Date().getTime();
    $('#timer').html("" + (end - start) + "ms");
});
The technique could probably be improved. Here's some ideas:
Use IP address instead of DNS host name.
Do the "ping" multiple times, throw out the highest and lowest scores, then average the rest (see the sketch after this list).
If your web page has a lot of heavy processing going on, try to do the tests when you think the UI load is lightest.
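For instance, the second idea might look something like this. A rough sketch reusing the image-error trick from the Fiddle; the host, port, and sample count are placeholders, not tested values:
// Repeat the image-error "ping", drop the extremes, average the rest.
function samplePing(host, samples, callback) {
    var times = [];
    function once() {
        var start = Date.now();
        var img = new Image();
        img.onerror = function () {
            times.push(Date.now() - start);
            if (times.length < samples) {
                once();
            } else {
                times.sort(function (a, b) { return a - b; });
                var trimmed = times.slice(1, -1);           // throw out highest and lowest
                var sum = trimmed.reduce(function (a, b) { return a + b; }, 0);
                callback(Math.round(sum / trimmed.length)); // average of the rest
            }
        };
        // invalid port so the server rejects the connection quickly; cache-buster appended
        img.src = 'http://' + host + ':8886/?' + Math.random();
    }
    once();
}
// usage (use at least 3 samples): samplePing('fate.holmes-cj.com', 7, function (ms) { console.log(ms + ' ms'); });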
With jQuery you could:
$.ajax(url, settings) (http://api.jquery.com/jQuery.ajax/) and take the time in beforeSend and in complete via Date.now(), then subtract those times -> that gives you the time for the request (not exactly a "ping", though).
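A rough sketch of that idea (the /ping URL is a placeholder for any tiny resource on your own server):
// Time an ajax round trip with beforeSend/complete.
var sendTime;
$.ajax('/ping', {
    cache: false,            // avoid a cached response faking a low time
    beforeSend: function () {
        sendTime = Date.now();
    },
    complete: function () {
        var elapsed = Date.now() - sendTime;
        console.log('request took ' + elapsed + ' ms (not exactly a ping)');
    }
});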
2021:
Tried this again for a React app I'm building. I don't think the accuracy is too great.
const ping = () => {
    var start = new Date().getTime();
    api.get('/ping').then((res) => {
        // take the end time before logging, so console output doesn't inflate the figure
        var end = new Date().getTime();
        console.log(res);
        console.log(`${end - start} ms`);
    }, (err) => {
        console.log(err);
    });
};
Wrote my own little API, but I suppose there's just way too much going on during the request.
In the terminal, I get about a 23 ms ping to my server; using this, it shoots up to around 200-500 ms.
Related
A web client should only expose some features when a backend API is up and running. Therefore, I'm looking for a clean way to monitor the availability of this backend.
As a quick fix, I made a timer-based function that performs a basic GET on the API root. It's not very clean, generates lots of traffic, and pollutes the JavaScript console with errors (when the server is down).
How should one deal with such a situation?
You can trigger something along the lines of this when you need it:
function checkServerStatus()
{
    setServerStatus("unknown");
    var img = document.body.appendChild(document.createElement("img"));
    img.onload = function()
    {
        setServerStatus("online");
    };
    img.onerror = function()
    {
        setServerStatus("offline");
    };
    img.src = "http://myserver.com/ping.gif";
}
Make ping.gif small (1 pixel) to make it as fast as possible.
Of course you can do it more smoothly by hitting an API endpoint that returns true and keeps the response time really small, but that requires some coding on the back-end; this approach simply needs you to place a 1-pixel gif image in the correct directory on the server. You can use any picture already present on the server, but expect more traffic and a longer load time as the image grows larger.
Now put this in some function that calls it on a delay, or simply call it when you need to check the status; it's up to you.
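For example (the 30-second interval here is arbitrary):
// Poll every 30 seconds
setInterval(checkServerStatus, 30000);
// ...or just call it once, right before enabling a feature:
checkServerStatus();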
If you need the server to send your app a notification when it goes down, then you need to implement push technology:
https://en.wikipedia.org/wiki/Push_technology
Ideally, you would have a highly reliable third server with a fast response rate that pings the desired server at some interval to determine whether it is up, and then uses push to get that information to your app. That way the third server only sends a push when the status of your app server changes. Ideally, this server's requests would have high priority in your app server's queue, and the two servers would be well connected and close to each other, but not on the same network in case that network fails.
Recommendation:
The first approach should do you good, since it's simple to implement and requires the least amount of knowledge.
Consider the second if:
You need a really small checking interval, which makes your application slower and network traffic heavier
You have multiple applications that need the same check, making the load heavier on each application, the network AND the server. The second approach lets you use a single ping to determine the truth for all apps.
In order to limit the number of requests, a simple solution is to use server-sent events. This protocol, used on top of HTTP, allows the server to push multiple updates in response to a single client request.
Client-side code (JavaScript):
var evtSource = new EventSource("backend.php");
evtSource.onmessage = function(e) {
    console.log('status:' + e.data);
};
evtSource.onerror = function(e) {
    // add some retry logic, then display an error to the user
};
Backend code (PHP; the same technique is supported by other languages):
header("Content-Type: text/event-stream");
header("Cache-Control: no-cache");
while (1) {
    // Every 30 s, send an OK status (SSE messages need the "data:" prefix and a blank line)
    echo "data: OK\n\n";
    ob_flush();
    flush();
    sleep(30);
}
In both cases it will limit the number of requests (only 1 per "session"), but you will have 1 socket open per client, which can also be too heavy for your server.
If you really want to lower the workload, you should delegate it to an external monitoring platform that can expose an API to publish the backend status.
Such a platform may already exist if your backend is hosted on a cloud platform.
This is a node.js project with the socket.io and express modules.
Each client has a canvas and runs animations on it. When the server emits the initiate parameter, the animation can start.
The problem is that there is a time gap between clients when their animations start. The longer the animation runs, the more obvious the gap becomes, and the positions of the figures end up really different. What I want is for everybody to see the same thing on their screen.
Here's how the server delivers the data:
socket.broadcast.emit('init', initData);
socket.emit('init', initData);
The animation function is on the client; it starts when the initiate data is received from the server.
I'm not sure if it's because each client receives this data at a different time.
So how to reduce this gap?
Many thanks.
I think you should try the following: make sure (using onload events, collected on the server with socket.io) that every client has downloaded the animation, and then send a signal to start it.
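A minimal sketch of that idea. The event names, the expected client count, and the prepareAnimation/startAnimation functions are assumptions, not part of your code:
// Server side:
var readyCount = 0;
var expectedClients = 4; // however many clients you are waiting for
io.on('connection', function (socket) {
    socket.emit('init', initData); // send the data as before
    socket.on('animationLoaded', function () {
        readyCount += 1;
        if (readyCount === expectedClients) {
            io.emit('startAnimation'); // everyone starts together
        }
    });
});

// Client side:
socket.on('init', function (initData) {
    prepareAnimation(initData);     // build everything, but do not start drawing yet
    socket.emit('animationLoaded'); // tell the server this client is ready
});
socket.on('startAnimation', function () {
    startAnimation();               // now start drawing
});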
Here is a simple formula and routine that works with socket.io, PHP, or anything really.
I used it to fade in a live video stream 10 seconds before it aired. Given the inherent lag, device performance patterns, and wrong time zones, you can only expect to get about 30 ms of precision forward or backward, but to most observers, it all happens "at the same time".
Here is a simulation of a server that's about two minutes behind a simulated client (you), where the server wants the client to show a popup in 1.2 seconds:
//the first two vars should come from the server:
var serverWant=+new Date() - 123456 + 1200; // what server time is the event ?
var serverTime=+new Date() - 123456; // what server time is now ?
//the rest happens in your normal event using available info:
var clientTime=+new Date(); // what client time is now ?
var off= clientTime - serverTime; // how far off is the client from the server?
var clientWant= serverWant + off; // what client time is the event ?
var waitFor = clientWant - +new Date(); // how many millis to time out for until event ?
setTimeout(function(){ alert( 'SNAP!' );}, waitFor);
How reliable is this? Try changing both "- 123456"s to "+ 12345"s and see if the popup still waits 1.2 seconds to fire, despite not using Math.abs anywhere in the calculation...
In socket.io, you could send the server time and the scheduled time to the client for computation in a preliminary event:
socket.broadcast.emit('pre-init', {
    serverTime: +new Date(),
    serverWant: +new Date() + 1200
});
Then use those values and the math above to schedule the animation a few moments or a few hours out, as needed, on demand, to the split second.
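On the client, the same math might look like this (a sketch; startAnimation is a placeholder for your own drawing code):
// Compute the clock offset and schedule the start.
socket.on('pre-init', function (data) {
    var off = Date.now() - data.serverTime;  // how far off is this client from the server?
    var clientWant = data.serverWant + off;  // event time translated to the client clock
    var waitFor = clientWant - Date.now();   // millis to wait
    setTimeout(function () { startAnimation(); }, Math.max(waitFor, 0));
});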
You need the dead reckoning technique in order to keep the simulated client-side state as close as possible to the real state on the server.
You might send state packets to clients periodically, for example every 200 ms (5 times a second), and extrapolate from this data on the client side.
In addition, you have to remember that different clients have different latency. Since you want to keep the same state everywhere, there are generally two approaches: interpolation (use the last known data point and the one before it), or extrapolation (use the last known data point and predict into the future based on your own latency).
Extrapolation suits real-time interactive stuff better, but has problems with error correction: the client will sometimes predict wrongly (an object suddenly stops, but because of the delay the client predicted that it kept moving).
Interpolation makes everything slightly delayed and in the past, but does not suffer from such errors because there are no predictions. The drawback is that before interpolating you need to wait an amount of time equal to the latency of the slowest user, which means a slower user forces everyone else to be slowed down as well.
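Here is a rough interpolation sketch. The snapshot shape ({ t, x, y }), the 'state' event name, and drawFigure are assumptions for illustration:
var previous = null;
var latest = null;
var renderDelay = 200; // render 200 ms in the past, matching the 5-updates-per-second rate

socket.on('state', function (snapshot) {
    // snapshot.t is assumed to already be translated into client time (see the clock-offset trick above)
    previous = latest;
    latest = snapshot;
});

function render(now) {
    if (previous && latest && latest.t !== previous.t) {
        var renderTime = now - renderDelay;
        // 0..1 position between the two snapshots, clamped
        var k = Math.min(Math.max((renderTime - previous.t) / (latest.t - previous.t), 0), 1);
        var x = previous.x + (latest.x - previous.x) * k;
        var y = previous.y + (latest.y - previous.y) * k;
        drawFigure(x, y); // placeholder for your own canvas drawing code
    }
    requestAnimationFrame(function () { render(Date.now()); });
}
render(Date.now());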
I'm currently attempting to create a simple video chat service using WebRTC with Ajax for the signalling method.
As per the recommendation of another Stack Overflow user, in order to make sure I was understanding the flow of a standard WebRTC app properly, I first created a simple WebRTC video chat service in which I printed the created offer or answer and ICE candidates out to the screen, and manually copied and pasted that info into a text area in the other client window to process everything. Upon doing that, I was able to successfully get both videos to pop up.
After getting that to work properly, I decided to try and use Ajax as the signalling method. However, I can't seem to get it to work now.
In my current implementation, every time offer/answer or ICE candidate info is created, I instantly create a new Ajax object, which is used to add that info (after the JSON.stringify method has been executed on it) to a DB table. Both clients are constantly polling that DB table, searching for new info from the other client.
I've been echoing a lot of information out to the console, and as far as I can tell, a valid offer is always sent from one client to another, but upon receiving that offer, successfully setting it as the remote description, and creating an answer, any attempt I make to set the local description of the "answerer" fails.
Is there any particular reason why this might happen? Here's a snippet of my code:
var i,
    len;

for (i = 0, len = responseData.length; i < len; i += 1) {
    message = JSON.parse(responseData[i]);
    if (message.type === 'offer') {
        makeAnswer(message);
    }
    // Code omitted.
}
...
makeAnswer = function (offer) {
    pc.setRemoteDescription(new RTCSessionDescription(offer), function () {
        pc.createAnswer(function (desc) {
            // An answer is always properly generated here.
            pc.setLocalDescription(desc, function () {
                // This success callback function is never executed.
                setPayload(JSON.stringify(pc.localDescription));
            }, function () {
                // I always end up here.
            });
        });
    });
};
In essence, I loop through any data retrieved from the DB (sometimes there's both an offer and lots of candidate info that's gathered all at once), and if the type property of a message is 'offer', I call the makeAnswer function, and from there, I set the remote description to the received offer, create an answer, and try to set the answer to the local description, but it always fails at that last step.
If anyone can offer any advice as to why this might be happening, I would be very appreciative.
Thank you very much.
Well, I figured out the problem. It turns out that I wasn't encoding the SDP and ICE info before sending it to a PHP script via Ajax. As a result, any plus signs (+) in the SDP/ICE info were being turned into spaces, thus causing the strings to differ between the local and remote clients and not work.
I've always used encodeURIComponent on GET requests with Ajax, but I never knew you had to use that function with POST requests as well. That's good to know.
Anyway, after I started using the encodeURIComponent function with the posted data, and then fixed my logic up a bit so that ICE candidates are never set until after both local and remote descriptions are set, it started working like a charm every time.
That's the good news. The bad news is that everything was working fine on my local host, but as soon as I ported the exact same code over to my web-hosted server, even though the console was reporting that the offer/answer and ICE info were all properly being received and set, the remote video isn't popping up.
Sigh. One more hurdle to cross before I can be done with this.
Anyway, just to let everyone know, the key is to use encodeURIComponent before sending the SDP/ICE info to a server-side script, so that the string received on the other end is exactly the same.
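The fix might look something like this (a sketch; the script name and the "payload" parameter name are placeholders, and setPayload is the same helper referenced in the code above):
function setPayload(json) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'signal.php', true);
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    // Without encodeURIComponent, '+' characters in the SDP arrive as spaces on the server side.
    xhr.send('payload=' + encodeURIComponent(json));
}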
I am trying to use periodic refresh (Ajax)/polling on my site via XMLHttpRequest (XHR) to check every 10 seconds whether a user has a new message in the database; if there is one, I inform them by creating a div dynamically, like this:
function shownotice() {
    var divnotice = document.createElement("div");
    var closelink = document.createElement("a");
    closelink.onclick = this.close;
    closelink.href = "#";
    closelink.className = "close";
    closelink.appendChild(document.createTextNode("close"));
    divnotice.appendChild(closelink);
    divnotice.className = "notifier";
    divnotice.setAttribute("align", "center");
    document.body.appendChild(divnotice);
    divnotice.style.top = document.body.scrollTop + "px";
    divnotice.style.left = document.body.scrollLeft + "px";
    divnotice.style.display = "block";
    request(divnotice);
}
Is this a reliable or stable way to check for messages? When I look under Firebug, a lot of requests are going to my database. Can this method take my database down because of too many requests? Is there another way to do this? When I log in to Facebook and check under Firebug, no requests seem to be going on, but I know they are using periodic refresh too. How do they do that?
You can check for new data every 10 seconds, but instead of checking the db, you need to do a lower impact check.
What I would do is modify the db update process so that when it makes a change to some data, it also updates the timestamp on a file to show that there is a recent change.
If you want better granularity than "something changed somewhere in the db" you can break it down by username (or some other identifier). The file(s) to be updated would then be the username for each user who might be interested in the update.
So, when your script asks the server if there is any information for user X newer than time t, instead of making a DB query, the server-side script can just compare the timestamp of a file with the time parameter and see if there is anything new in the database.
In the process that updates the DB, add code that (roughly) does the following (sketched here in PHP):
foreach ($usersInterestedInThisUpdate as $username) {
    touch("/updates/" . $username);
}
Then your function to see if there is new data looks something like:
function newDataForUser($username, $t) {
    // compare the file's last-modified time with the requested time
    $ts = filemtime("/updates/" . $username);
    return ($ts > $t);
}
Once you find that there is new data, you can then do a full blown DB query and get whatever information you need.
I left facebook open with firebug running and I'm seeing requests about once a minute, which seems like plenty to me.
The other approach, used by Comet, is to make a request and leave it open, with the server dribbling out data to the client without completing the response. This is a hack, and violates every principle of what HTTP is all about :). But it does work.
This is quite unreliable and probably far too taxing on the server in most cases.
Perhaps you should have a look into a push interface: http://en.wikipedia.org/wiki/Push_technology
I've heard Comet is the most scalable solution.
I suspect Facebook uses a Flash movie (they always download one called SoundPlayerHater.swf) which they use to do some comms with their servers. This does not get caught by Firebug (might be by Fiddler though).
This is not a better approach, because you end up querying your server every 10 seconds even when there are no real updates.
Instead of this polling approach, you can simulate server push (reverse Ajax, or Comet). This greatly reduces the server workload, and the client is only updated when there is an update on the server side.
As per Wikipedia:
Reverse Ajax refers to an Ajax design pattern that uses long-lived HTTP connections to enable low-latency communication between a web server and a browser. Basically it is a way of sending data from client to server and a mechanism for pushing server data back to the browser.
For more info, check out my other response to a similar question.
I run a couple of game tunnelling servers and would like to have a page where the client can run a ping on all the servers and find out which is the most responsive. As far as I can see there seems to be no proper way to do this in JavaScript, but I was thinking, does anybody know of a way to do this in flash or some other client browser technology maybe?
Most applet technology, including Javascript, enforces a same-origin policy. It may be possible to dynamically add DOM elements, such as images, and collect timing information using the onload event handler.
A rough sketch:
// assumes "servers" is an array of objects, each with an imgUrl property
for (const server of servers) {
    const img = document.createElement('img');
    server.startTime = Date.now();
    img.onload = function () { server.endTime = Date.now(); };
    img.src = server.imgUrl;
}
Then wait an appropriate time and check the timing for each server object. Repeat as needed and compute averages if you want. I'm not sure what kind of accuracy you can expect.
Disadvantages:
You are probably using the wrong tool for the job. A browser is not equipped for this sort of application.
It's probably quite inaccurate.
If the resource you request is cached it won't give you the results you want, but you can work around that by changing the url each time.
This is bandwidth-intensive compared to a normal ping. Make the image tiny, such as a spacer.gif file.
The timing depends not only on the latency of the remote server but the bandwidth of that server. This may be a more or less useful measure but it's important to note that it is not simply the latency.
You need to be able to serve HTTP requests from the various servers and, crucially, each server should serve the exact same resource (or a resource of the same length). Conditions on the server can affect the response time, such as if one server is compressing the data and another isn't.
Before the call to the server, record the Javascript time:
var startTime = new Date();
Load an image from the server:
var img = new Image();
img.onload = function() {
    // record end time
};
img.src = "http://server1.domain.com/ping.jpg";
As soon as the request is finished, record the time again. (Given of course that the request didn't time out.)
var endTime = new Date();
Your ping in milliseconds is:
var ping = endTime.getTime() - startTime.getTime();
All you really need is the time from the connection start to the time of the first readystate change that signals a response...
function getPing() {
    var start;
    var client = getClient(); // XMLHttpRequest object
    client.onreadystatechange = function() {
        // readyState 2 (HEADERS_RECEIVED) is the first sign of a response from the server
        if (client.readyState >= 2) {
            pingDone(start);                  // handle ping
            client.onreadystatechange = null; // remove handler
        }
    };
    start = new Date();
    client.open("HEAD", "/ping.txt"); // static file
    client.send();
}

function pingDone(start) {
    var done = new Date();
    var ms = done.valueOf() - start.valueOf();
    alert(ms + "ms ping time");
}
function getClient() {
    if (window.XMLHttpRequest)
        return new XMLHttpRequest();
    if (window.ActiveXObject)
        return new ActiveXObject('MSXML2.XMLHTTP.3.0');
    throw("No XMLHttpRequest Object Available.");
}
Here's an <iframe> approach:
[Diagram of the two-column server/iframe table; source: magnetiq.com]
Create a table (not necessarily in the literal <table> sense) with two columns. The first column will hold the names of the servers (and possibly links to them). The second column holds iframes that load probe documents from the respective servers. Each probe document does this on the initial fetch request:
Get current system time
Do a redirect (302) to a second probe document while passing the system time as a query parameter
The second probe document reads the current system time, calculates the delta from the initial reading that was passed to it and just displays it in big fat letters. This delta will be the time it took for the server to respond to the client with a redirect response plus the time it took for the client to make the second request to the redirection target. It's not exactly a "ping" but it's a comparable measure of the client's relative latency with each server. In fact, it's a "reverse ping" from the server to the client.
You'd be using iframes without infringing the same-domain policy because there's no attempt at manipulating the iframe contents at all. The player will simply see the values with his/her own eyes and you'll rely on the user glancing at the numbers and clicking on the server link that makes the most sense.
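If it helps, here is a rough sketch of the two probe documents as a small Node/Express server (the framework choice, the /probe1 and /probe2 paths, and the port are assumptions, not part of the original setup):
const express = require('express');
const app = express();

// First probe: note the current server time and redirect to the second probe.
app.get('/probe1', (req, res) => {
    res.redirect(302, '/probe2?t=' + Date.now());
});

// Second probe: compute the delta and show it in big fat letters.
app.get('/probe2', (req, res) => {
    const delta = Date.now() - Number(req.query.t);
    res.send('<h1 style="font-size:72px">' + delta + ' ms</h1>');
});

app.listen(8080);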
Anything that makes an HTTP request (like most of the answers here) will generally measure a latency at least twice what you'd see for a normal ping, because you need the three-way handshake and the termination packet at minimum (two round trips rather than one). If you make HTTP requests, try to keep the headers to a minimum. A long enough header (due to a chatty server, or cookies etc. on the client) can add additional round trips into the mix, throwing off your measurements.
As Cherona points out, if you already have an active HTTP 2 connection to the server, or if the server speaks HTTP 3, then this may not be the case.
The most accurate option would be to open a websocket connection to each server and measure the time it takes to send a tiny message and receive a tiny response (after the connection has been established).
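A minimal sketch of that approach, assuming each server exposes a websocket endpoint that simply echoes what it receives (the URL is a placeholder):
function wsPing(url, callback) {
    var ws = new WebSocket(url);
    ws.onopen = function () {
        var start = performance.now();
        ws.onmessage = function () {
            callback(performance.now() - start); // round trip measured after the handshake
            ws.close();
        };
        ws.send('ping'); // tiny payload
    };
    ws.onerror = function () { callback(null); };
}
// usage: wsPing('wss://server1.example.com/echo', function (ms) { console.log(ms); });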
If you are talking about running something client-side, I am not sure this is possible, for security reasons.
Maybe your best bet would be a Java applet - but again, this needs to be checked against the local security policy.
If I try to think about some hack in JS to do this, maybe you can try to send an async request with a callback function which measures the milliseconds it took - but this is just off the top of my head.
It's not that hard to measure server response time in Flash.
Flash must ask for a policy file before accessing remote servers.
The default location for such a policy file is the root folder of the server: /crossdomain.xml
(You can easily find information about the crossdomain file format.)
Since such a file is needed anyway, why not use it to measure server response time? Load the file itself instead of an image and measure the time it took using getTimer().
This will give you a good estimate on HTTP connections.
But if you're dealing with game servers, you might want to directly check the speed of the TCP connection. To do that you'll need to use the flash.net.Socket
You'll also have to ask for a policy file first by running:
Security.loadPolicyFile("xmlsocket://server.domain.com:5342");
Here 5342 represents the port number on which your server should respond with the proper XML policy string.
After making the socket connection, any request/response will let you measure different server response times.
The problem with 'file pings' is that you are evaluating the HTTP server's response, whereas the target resource for the games you serve may behave very differently and thereby have a different latency.
Just an idea out of the blue, maybe even unrealistic depending on the actual context:
But wouldn't it be interesting to make a server script based on a short sequence of tasks typically executed by the servers during gameplay (e.g. opening an RTMP connection, retrieving a piece of information, sending it back)? Depending on the total number of servers, you could open them almost simultaneously and declare the first response the winner (subtracting the time your client needs independently to process each query).
Of course this is quite an expensive method, server-side speaking, but at least you would hopefully get a reliable result (server and network latencies summed up). Even if it takes a couple of seconds to evaluate, it would be a matter of a fraction of the total enjoyable game-play.
Based on the responses of @Mr. Shiny and @Georg Schölly, here is a complete and commented example.
In order to test it, just copy and paste the code below, in the same order, into an empty .html, .php, or other compatible file.
Before starting the GET, record the current JavaScript time.
Using new Date(), we create a new date object with the current date and time.
<script type="text/javascript">
var startTime = new Date();
Now let's create an HTML image object, still without a source, and assign it to the variable img.
var img = new Image();
The next step is to put a source in the image. The .src property reflects the src HTML attribute.
Important! Point your img.src to a very small and lightweight image file, if possible something less than 10 KB.
To prevent caching, a random parameter is added at the end of the file name, after the .png extension.
var random_string = Math.random().toString();
img.src = "http://static.bbci.co.uk/frameworks/barlesque/5.0.0/orb/4/img/bbc-blocks-dark.png" + "?" + random_string;
Now we assign our function, which will run only once the image loads, thanks to .onload:
img.onload = function() {
    var endTime = new Date();
    var ping = endTime.getTime() - startTime.getTime();
    alert(img.src + " loaded in " + ping + " ms");
}
</script>
Inside the function, the variable endTime receives the date and time after the source image has loaded.
Lastly, the ping variable receives the final time minus the initial time.
The alert popup shows the result.