How to reduce time difference between clients receiving data in socket.io?

A node.js project with modules socket.io and express.
Now each client has a canvas that runs animations. When the server emits the init event, the animation can start.
Now the problem is that there is a time gap between clients when their animations start. The longer the animation runs, the more obvious the gap becomes; the positions of the figures end up really different. But what I want is for everybody to see the same thing on their screen.
Here's how the server delivers the data:
socket.broadcast.emit('init', initData);
socket.emit('init', initData);
The animation function is in the client, it starts when receiving the initiate data from the server.
I'm not sure if it's because each client receives the data at a different time.
So how can I reduce this gap?
Many thanks.

I think you should try the following: make sure (using onLoad events and collecting those events on the server with socket.io) that every client has downloaded the animation, and then send a signal to start it.
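A minimal sketch of that barrier, assuming a fixed, known number of clients (EXPECTED_CLIENTS, initData, and the event names 'ready' and 'start' are invented for illustration):

// server.js - hold the start signal until every client reports ready
const io = require('socket.io')(3000);
const EXPECTED_CLIENTS = 4;               // hypothetical player count
const initData = { /* animation parameters */ };
let readyCount = 0;

io.on('connection', (socket) => {
    // the client emits 'ready' from its onload handler once assets are downloaded
    socket.on('ready', () => {
        readyCount++;
        if (readyCount === EXPECTED_CLIENTS) {
            io.emit('start', initData);   // one broadcast, everyone starts together
        }
    });
});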

Here is a simple formula and routine that works with socket.io, PHP, or anything really.
I used it to fade in a live video stream 10 seconds before it aired. Given the inherent lag, varying device performance, and wrong time zones, you can only expect to get about 30 ms of precision forward or backward, but to most observers it all happens "at the same time".
Here is a simulation of a server that's about two minutes behind a simulated client (you), where the server wants the client to show a popup in 1.2 seconds:
//the first two vars should come from the server:
var serverWant=+new Date() - 123456 + 1200; // what server time is the event ?
var serverTime=+new Date() - 123456; // what server time is now ?
//the rest happens in your normal event using available info:
var clientTime=+new Date(); // what client time is now ?
var off= clientTime - serverTime; // how far off is the client from the server?
var clientWant= serverWant + off; // what client time is the event ?
var waitFor = clientWant - +new Date(); // how many millis to time out for until event ?
setTimeout(function(){ alert( 'SNAP!' );}, waitFor);
How reliable is this? Try changing both "- 123456"s to "+ 12345"s and see if the popup still waits 1.2 seconds to fire, despite not using Math.abs anywhere in the calculation...
In socket.io, you could send the server time and the scheduled time to the client for computation in a pre-event:
socket.broadcast.emit('pre-init', {
serverTime: +new Date(),
serverWant: +new Date() + 1200
});
and then use those values and the math above to schedule the animation a few moments or a few hours out, as needed, on demand and to the split second.
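On the client, a minimal sketch of the receiving side (the 'pre-init' event name matches the snippet above; startAnimation() is a hypothetical stand-in for your animation entry point):

socket.on('pre-init', function (data) {
    var off = +new Date() - data.serverTime;   // how far off is this client from the server?
    var clientWant = data.serverWant + off;    // event time translated into client time
    var waitFor = clientWant - +new Date();    // millis to wait until the event
    setTimeout(function () { startAnimation(); }, waitFor); // startAnimation() is hypothetical
});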

You need the Dead Reckoning technique in order to keep the simulated client-side state as close as possible to the real state on the server.
You might send state packets to clients periodically, for example every 200 ms (5 times a second), and extrapolate from this data on the client side.
In addition, you have to remember that different clients have different latency. So if you want everyone to see the same state, there are generally two approaches: interpolation (blend between the last two known states) or extrapolation (take the last known state and predict forward based on the client's own latency).
Extrapolation suits real-time interactive applications better, but has problems with error correction: the client can make a wrong prediction (an object suddenly stopped, but because of the delay the client predicted it was still moving).
Interpolation makes everything slightly delayed and in the past, but does not suffer from such errors, since nothing is predicted. The drawback is that you have to delay rendering by an amount equal to the slowest user's latency, which means the slowest user slows everyone else down as well.
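A minimal sketch of the interpolation variant, assuming each state packet carries a server timestamp t and a position x (the 'state' event and field names are invented for illustration):

var buffer = [];            // state packets as they arrive from the server
var RENDER_DELAY = 200;     // render this far in the past, in ms (one update interval)

socket.on('state', function (s) { buffer.push(s); }); // s = { t: serverTime, x: position }

function positionAt(serverNow) {
    var renderTime = serverNow - RENDER_DELAY;
    // find the two packets straddling renderTime and blend between them
    for (var i = buffer.length - 1; i > 0; i--) {
        var a = buffer[i - 1], b = buffer[i];
        if (a.t <= renderTime && renderTime <= b.t) {
            var k = (renderTime - a.t) / (b.t - a.t);  // blend factor in [0, 1]
            return a.x + (b.x - a.x) * k;              // linear interpolation
        }
    }
    return buffer.length ? buffer[buffer.length - 1].x : 0; // fallback: latest known
}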

Related

Realtime clock synch across browser clients

I am writing a collaborative music web-app that can be "played" in parallel across browsers in different devices.
Imagine two people standing in the same room, and playing their "instruments" together. Each one holding a different device running a webapp in their hands.
I need both devices to have a synched timer where they agree on what time it is +/- several milliseconds.
My first attempt was simply to trust that my two devices (a Windows PC and an Android phone) are synched, but their clocks turned out to be several seconds apart. So I realize that I need to implement this myself.
Is there a REST/WebSocket service that I can use to periodically synchronize the apps' time?
If not, is there a standard algorithm that would be effective to implement over WebSocket?
My naive instinct is to implement a four-way ping between the client and server and halve the ping/pong time, but I am pretty sure that someone has already implemented something better.
As we are talking about music, standard multiplayer time drift won't cut it. I need the clocks to be in sync to greater accuracy than the network ping time.
Is there something like NTP that works in a browser?
This works mostly OK for now, with some hiccups if browser execution is delayed on a busy machine.
I echo the request back to the client, with the server's time added on the server.
var timeOffsetFromServer = 0; // server clock minus local clock, in ms

function syncTime(requestTimeMs, serverTimeMs) {
    var now = Date.now();
    var latency = (now - requestTimeMs) / 2;            // assume a symmetric round trip
    var serverTimeWithLatency = serverTimeMs + latency; // estimated server time right now
    timeOffsetFromServer = Math.round(serverTimeWithLatency - now);
}

function getSynchedTimeMs() {
    return Date.now() + timeOffsetFromServer;
}
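For context, a minimal sketch of the echo round trip that feeds syncTime (the 'time-sync' event name is an assumption):

// client: send the local send-time; the server echoes it back with its own time added
socket.emit('time-sync', { requestTimeMs: Date.now() });
socket.on('time-sync', function (msg) {
    syncTime(msg.requestTimeMs, msg.serverTimeMs);
});

// server side of the echo:
// socket.on('time-sync', (msg) =>
//     socket.emit('time-sync', { requestTimeMs: msg.requestTimeMs, serverTimeMs: Date.now() }));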

JavaScript unhackable countdown timer

I'm currently working on a quiz app. When a question appears, the user has 10 seconds to answer it; otherwise they don't get the points for that question. Once the timer is up, I want to automatically move to the next question. I am currently facing issues with how to make the 10-second countdown timer "unhackable" by the client.
My initial idea was to use something along the lines of setTimeout() on the client side for 10 seconds, and once the timer is complete, ask the server to fetch the next question. The problem with this is that the client-side timer can be hacked/modified to run for longer than 10 seconds, potentially giving some users longer than 10 seconds to answer the question.
client <--- sends question --- server
|
start timer for 10 seconds (as this is client-side, it could easily be extended)
|
.
10 seconds later
.
V
client --- ask for next question / send answer ---> server
In order to keep it unhackable, I thought of moving the time-checking logic to the server side. This would involve keeping two variables (A and B) on the server side per connected user: one representing the time the question was sent, and the other representing the time an answer was given. The client-side timer would still run, but the server side uses the timestamps to validate whether the difference between A and B exceeds 10 seconds:
client <--- sends question --- server (send question at timestamp `A`)
|
start timer for 10 seconds (as this is client-side, it could easily be extended)
|
.
10 seconds later
.
V
client --- ask for next question / send answer ---> server (receive request at timestamp `B`)
|
+-----------------------------------------------------+
v
server logic:
duration = B - A
if(duration > 10 seconds) {
// allocated time exceeded
}
However, I see a few potential flaws with this. The time it takes for the question to arrive at the client from the server, and the time between when the server sent the question (time A) and when the client-side timer starts, won't be instantaneous and will depend on the ping/connection the user has to the server. Similar ping issues exist when the client asks for the next question. Moreover, I'm worried that if the client-side timer, which is supposed to run for 10 seconds, lags behind a little, it would also cause the server-side check to fail. As a result, checking whether the duration exceeded 10 seconds isn't enough; it requires some additional buffer. However, I feel like arbitrarily hard-coding the buffer to something like 1 or 2 seconds could still lead to issues, and it feels like a hacky workaround that isn't very robust.
Question: I'm wondering if there is a different approach that I am missing to keep the client-side timer unhackable and accurate. I also want to avoid creating separate timers with setTimeout() or the like for each connected user on the server side, as many users could be connected at any given time, and having that many timers queued up on the server feels wasteful. I also want to keep the number of messages sent back and forth between the client and the server to a minimum.
What about a cookie?
Set a cookie with a unique token. Set its expiration to now() + 15 seconds. Save the token and time on the server side. Keep your client-side timer running with an auto-submit after 10 seconds.
When the answer comes in, if there is no cookie, it almost certainly means the answer was sent after the delay (and the timer was hacked).
So a cookie expiration time of now() + 10 seconds, plus a grace period of ~5 additional seconds, should be more than enough to compensate for HTTP delays.
If they hack the timer, the cookie will have expired (and been deleted). If they also hack the cookie expiration(!), the token can still be used to retrieve the datetime the question was sent, which you compare with the datetime the answer was received.
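A minimal Express sketch of the cookie approach (the route names, the in-memory token store, and the 15-second maxAge are invented for illustration):

const express = require('express');
const crypto = require('crypto');
const cookieParser = require('cookie-parser');

const app = express();
app.use(cookieParser());

const pending = {}; // token -> server time the question was sent

app.get('/question', (req, res) => {
    const token = crypto.randomBytes(16).toString('hex');
    pending[token] = Date.now();
    res.cookie('quiz', token, { maxAge: 15000, httpOnly: true }); // 10 s + ~5 s grace
    res.json({ question: '...' });
});

app.post('/answer', (req, res) => {
    const token = req.cookies.quiz;
    const sentAt = token && pending[token];
    if (!sentAt || Date.now() - sentAt > 15000) {
        return res.status(403).json({ error: 'time exceeded' }); // cookie expired or missing
    }
    delete pending[token];
    res.json({ ok: true });
});

app.listen(3000);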
Instead of starting the clock on the server when the question is sent, you could start the clock on the server when the question is shown to the user (on the client).
Maintain two clocks, one on the client and the other on the server.
Timestamp every time-sensitive request (starting and ending the quiz timer) and check whether the timestamp discrepancy is within an acceptable tolerance.
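A bare-bones sketch of that server-side check (the 10-second limit and 2-second tolerance are assumptions to tune):

const LIMIT_MS = 10000;      // allowed answer time
const TOLERANCE_MS = 2000;   // buffer for network latency, an assumption to tune

const questionShownAt = {};  // userId -> server time when the client reported "shown"

function onQuestionShown(userId) {
    questionShownAt[userId] = Date.now();
}

function isAnswerInTime(userId) {
    const shownAt = questionShownAt[userId];
    return shownAt !== undefined && Date.now() - shownAt <= LIMIT_MS + TOLERANCE_MS;
}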

How to get the latency time for a one way trip from client to server

I was wondering if there was a way to get the time (in ms) between the client sending a message to the server and the server receiving that message
I cannot compare the time in milliseconds between the server and the client using Date.now(), because every device's clock might be off by a few seconds.
I can find the time for a two-way trip by logging the time when I send a message and logging it again when I receive the reply from the server. However, the time it takes for a message to get from the client to the server may not be the same as the time from the server back to the client, so I can't simply divide the round-trip time by 2.
Any suggestions on how I can find this time or at least the difference between Date.now() on the client and the server?
Thanks in advance.
You can achieve this if you first synchronize the clocks of both your server and client using NTP. This requires access to an external time server; however, you can also run an NTP daemon on your own server (see ntpd).
There are several modules that implement NTP in node: node-ntp-client or sntp
Here's an example with node-ntp-client:
var ntpClient = require('ntp-client');

var clientOffset = 0; // local clock minus NTP time, in ms

ntpClient.getNetworkTime("pool.ntp.org", 123, function(err, date) {
    if (err) {
        console.error(err);
        return;
    }
    clientOffset = Date.now() - date.getTime();
});
When sending data to the server, send the timestamp as well:
var clientTimestamp = Date.now() - clientOffset
The server would have its own offset. When it receives the packet, it can calculate the latency using:
var latency = Date.now() - serverOffset - clientTimestamp;
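Putting both sides together, a sketch over socket.io (the 'msg' event and payload shape are invented; serverOffset is computed on the server the same way clientOffset is above):

// client: stamp outgoing messages with NTP-corrected time
socket.emit('msg', { sentAt: Date.now() - clientOffset, body: 'hello' });

// server: compare against its own NTP-corrected clock
socket.on('msg', function (data) {
    var latency = (Date.now() - serverOffset) - data.sentAt; // one-way trip in ms
    console.log('one-way latency: ' + latency + ' ms');
});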
I was wondering if there was a way to get the time (in ms) between the client sending a message to the server and the server receiving that message
No, there is not. At least, not without a common clock reference.
If I were to mail you a letter, you know what day you received the letter on but you don't know when it was sent. Therefore, you have no idea how long it took the post office to route and deliver the letter to you.
One possible solution is for me to date the letter. When you receive it, you can compare the received date to the date I sent it and determine how many days it was in transit. However, what if I wrote down the wrong date? Suppose I thought it was Friday when it was really Wednesday. Then, you can't accurately determine when it was sent.
Bringing this back to computers: we'd have to use our real-time clock (RTC) to timestamp the packet we send. Even with reasonable accuracy, our RTCs might be set a minute apart. I could send you a packet at 01:23:01.000Z my time, and you might receive it 10 milliseconds later... at 01:23:55.000Z your time, and calculate that it took 54 seconds to reach you!
Even if you synchronize with NTP, over the internet, that's potentially 10s to 100s of milliseconds off.
The way very accurate clock synchronization is usually done is via GPS receivers, which by their nature serve as an extremely accurate clock source. If you and I were both very accurately synchronized to GPS receivers, I could send you a packet and you could calculate how long it took.
This is generally impractical, which is why when we ping stuff, we use round-trip time.

How to test latency from user to server in browser using javascript?

I want to gather some information using the visitors of my websites.
What I need is for each visitor to ping 3 different hostnames and then save the following info into a DB.
Visitor IP, latency 1, latency 2, latency 3
Of course, everything has to be transparent to the visitor, without interrupting them in any way.
Is this possible? Can you give me an example? Are there any plugins for jQuery or the like to make it easier?
EDIT
This is what I have so far: jsfiddle.net/dLVG6, but the data is too random; it jumps from 50 to 190.
This is going to be more of a pain than you might think.
Your first problem is that JavaScript doesn't have ping. Mostly what JavaScript is good at is HTTP and a few cousin protocols.
The second problem is that you can't just issue some ajax requests and time the results (that would be way too obvious). The same-origin policy will prevent you from using ajax to talk to servers other than the one the page came from. You'll need to use JSONP, change the src of an image tag, or something else more indirect.
Your third problem is that you don't want to do anything that will result in a lot of data being returned. You don't want data transfer time or extensive server processing to interfere with measuring latency.
Fourth, you can't ask for URLs that might be cached. If the object happened to be in the cache, you would get really low "latency" measurements but it wouldn't be meaningful.
My solution was to use an image tag with no src attribute. On document load, set the src to point to a valid server but use an invalid port. Generally, it is faster for a server to simply reject your connection than to generate a proper 404 error response. All you have to do then is measure how long it takes to get the error event from the image.
From the Fiddle:
var start = new Date().getTime();
$('#junkOne').attr('src', 'http://fate.holmes-cj.com:8886/').error(function () {
var end = new Date().getTime();
$('#timer').html("" + (end - start) + "ms");
});
The technique could probably be improved. Here are some ideas:
Use an IP address instead of a DNS host name.
Do the "ping" multiple times, throw out the highest and lowest scores, then average the rest (see the sketch after this list).
If your web page has a lot of heavy processing going on, try to run the tests when you think the UI load is lightest.
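A sketch of that trimmed average on top of the image trick (the host and port are placeholders, and the cache-busting query string avoids cached results):

function pingOnce(cb) {
    var start = Date.now();
    var img = new Image();
    img.onerror = function () { cb(Date.now() - start); }; // connection rejection = "pong"
    img.src = 'http://example.com:8886/?' + Math.random();  // placeholder host, invalid port
}

function pingAverage(n, done) {
    var results = [];
    (function next() {
        pingOnce(function (ms) {
            results.push(ms);
            if (results.length < n) return next();
            results.sort(function (a, b) { return a - b; });
            var trimmed = results.slice(1, -1);              // drop highest and lowest
            var sum = trimmed.reduce(function (s, x) { return s + x; }, 0);
            done(sum / trimmed.length);
        });
    })();
}

pingAverage(5, function (avg) { console.log(avg + ' ms'); });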
With jQuery you could use $.ajax(url, settings) (http://api.jquery.com/jQuery.ajax/), take the time in beforeSend and again in complete via Date.now(), and subtract the two; that gives you the time for the request (not exactly a "ping", though).
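A minimal sketch of that approach, assuming a same-origin '/ping' endpoint (a placeholder):

var sendTime;
$.ajax('/ping', {                   // '/ping' is a placeholder same-origin endpoint
    cache: false,                   // bust the cache so timings are real
    beforeSend: function () { sendTime = Date.now(); },
    complete: function () {
        console.log('request took ' + (Date.now() - sendTime) + ' ms');
    }
});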
2021:
Tried this again for a React app I'm building. I don't think the accuracy is too great.
const ping = () => {
    var start = new Date().getTime();
    // `api` is an axios-style HTTP client instance pointed at my own server
    api.get('/ping').then((res) => {
        console.log(res);
        var end = new Date().getTime();
        console.log(`${end - start} ms`);
    }, (err) => {
        console.log(err);
    });
};
Wrote my own little API, but I suppose there's just way too much going on during the request.
In terminal, I get about 23ms ping to my server.. using this it shoots up to like 200-500ms.

How to determine latency of a remote server through the browser

I run a couple of game tunnelling servers and would like to have a page where the client can run a ping on all the servers and find out which is the most responsive. As far as I can see there seems to be no proper way to do this in JavaScript, but I was thinking, does anybody know of a way to do this in flash or some other client browser technology maybe?
Most in-browser technology, including JavaScript, enforces a same-origin policy. It may be possible to dynamically add DOM elements, such as images, and collect timing information using the onload event handler.
Pseudo-code:
for (const server of servers) {
    const img = document.createElement('IMG');
    server.startTime = getCurrentTimeInMS();
    img.onload = function() { server.endTime = getCurrentTimeInMS(); }; // const capture avoids the classic loop-closure bug
    img.src = server.imgUrl;
}
Then wait an appropriate time and check the timing for each server object. Repeat as needed and compute averages if you want. I'm not sure what kind of accuracy you can expect.
Disadvantages:
You are probably using the wrong tool for the job. A browser is not equipped for this sort of application.
It's probably quite inaccurate.
If the resource you request is cached it won't give you the results you want, but you can work around that by changing the url each time.
This is bandwidth-intensive compared to a normal ping. Make the image tiny, such as a spacer.gif file.
The timing depends not only on the latency of the remote server but the bandwidth of that server. This may be a more or less useful measure but it's important to note that it is not simply the latency.
You need to be able to serve HTTP requests from the various servers and, crucially, each server should serve the exact same resource (or a resource of the same length). Conditions on the server can affect the response time, such as if one server is compressing the data and another isn't.
Before the call to the server, record the Javascript time:
var startTime = new Date();
Load an image from the server:
var img = new Image();
img.onload = function() {
    // record end time
};
img.src = "http://server1.domain.com/ping.jpg";
As soon as the request is finished, record the time again. (Given of course that the request didn't time out.)
var endTime = new Date();
Your ping in milliseconds is:
var ping = endTime.getTime() - startTime.getTime();
All you really need is the time from the connection start, to the time of the first readystate change...
function getPing() {
var start;
var client = getClient(); // xmlhttprequest object
client.onreadystatechange = function() {
if (client.readyState > 0) {
pingDone(start); //handle ping
client.onreadystatechange = null; //remove handler
}
}
start = new Date();
client.open("HEAD", "/ping.txt"); //static file
client.send();
}
function pingDone(start) {
    var done = new Date();
    var ms = done.valueOf() - start.valueOf();
    alert(ms + "ms ping time");
}
function getClient() {
    if (window.XMLHttpRequest)
        return new XMLHttpRequest();
    if (window.ActiveXObject)
        return new ActiveXObject('MSXML2.XMLHTTP.3.0');
    throw new Error("No XMLHttpRequest Object Available.");
}
Here's an <iframe> approach:
Create a table (not necessarily in the literal <table> sense) with two columns. The first column holds the names of the servers (and possibly links to them). The second column holds iframes that load probe documents from the respective servers. Each probe document does this on the initial fetch request:
Get current system time
Do a redirect (302) to a second probe document while passing the system time as a query parameter
The second probe document reads the current system time, calculates the delta from the initial reading that was passed to it, and just displays it in big fat letters. This delta is the time it took for the server to respond to the client with a redirect response, plus the time it took for the client to make the second request to the redirection target. It's not exactly a "ping", but it's a comparable measure of the client's relative latency to each server. In fact, it's a "reverse ping" from the server to the client.
You'd be using iframes without infringing the same-domain policy because there's no attempt to manipulate the iframe contents at all. The players will simply see the values with their own eyes, and you rely on the user glancing at the numbers and clicking the server link that makes the most sense.
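A sketch of the two probe endpoints in Express (route names are invented; each game server would host something like this):

const express = require('express');
const app = express();

// first probe: note the server time and redirect, carrying it along
app.get('/probe', (req, res) => {
    res.redirect(302, '/probe2?t=' + Date.now());
});

// second probe: delta = redirect round trip through this client
app.get('/probe2', (req, res) => {
    const delta = Date.now() - Number(req.query.t);
    res.send('<h1>' + delta + ' ms</h1>'); // shown inside the iframe
});

app.listen(8080);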
Anything that makes an HTTP request (like most of the answers here) will generally measure a latency that's at least twice what you'd see for a normal ping, because you need the three-way handshake and the termination packet at minimum (two round trips rather than one). If you make HTTP requests, try to keep the headers to a minimum. A long enough header (due to a chatty server, or cookies etc. on the client) can add additional round trips into the mix, throwing off your measurements.
As Cherona points out, if you already have an active HTTP 2 connection to the server, or if the server speaks HTTP 3, then this may not be the case.
The most accurate option would be to open a websocket connection to each server and measure the time it takes to send a tiny message and receive a tiny response (after the connection has been established).
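A sketch of that WebSocket measurement (the wss:// URL is a placeholder, and the server is assumed to echo a tiny reply to each message):

var ws = new WebSocket('wss://example.com/ping');   // placeholder URL
ws.onopen = function () {
    var start = performance.now();                  // high-resolution local timer
    ws.onmessage = function () {
        console.log('RTT: ' + (performance.now() - start) + ' ms');
    };
    ws.send('ping');                                // server echoes, completing the round trip
};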
If you are talking about running something client-side, I am not sure this is possible, for security reasons.
Maybe your best bet would be a Java applet, but again this needs to be checked against the local security policy.
If I try to think of some hack in JS to do this: maybe you can send an async request with a callback function that measures the milliseconds it took, but this is just off the top of my head.
It's not that hard to measure server response time in Flash.
Flash must ask for a policy file before accessing remote servers.
The default location for such policy file is at the root folder of the server: /crossdomain.xml
(You can easily find information about the crossdomain file format)
Since such a file is needed anyway, why not use it to measure server response time? Load the file itself instead of an image and measure the time it takes using getTimer().
This will give you a good estimate for HTTP connections.
But if you're dealing with game servers, you might want to check the speed of the TCP connection directly. To do that you'll need to use flash.net.Socket.
You'll also have to ask for a policy file first by running:
Security.loadPolicyFile("xmlsocket://server.domain.com:5342");
where 5342 is your server's port number, on which it should respond with the proper XML policy string.
After making the socket connection, any request/response will let you measure different server response times.
The problem with 'file pings' is that you would be evaluating the HTTP server's response, whereas the actual game resources you serve may behave very differently and therefore have a different latency.
Just an idea out of the blue, maybe even unrealistic depending on the actual context:
But wouldn't it be interesting to write a server script based on a short sequence of tasks the servers typically execute during gameplay (e.g. opening an RTMP connection, retrieving a piece of information, sending it back)? Depending on the total number of servers, you could query them almost simultaneously and declare the first response the winner (subtracting the time your client needs to process each query independently).
Of course this is a fairly expensive method server-side, but at least you would hopefully get a reliable result (server and network latencies summed up). Even if it takes a couple of seconds to evaluate, that's only a fraction of the total enjoyable game-play.
Based on the answers of Mr. Shiny and Georg Schölly, here is a complete and commented example.
To test it, just copy and paste the code blocks below, in the same order, into an empty .html, .php, or other compatible file.
Before starting the request, record the current JavaScript time.
Using new Date(), we create a new date object with the current date and time.
<script type="text/javascript">
var startTime = new Date();
Now let's create an HTML image object, still without a source, and assign it to the variable img.
var img = new Image();
The next step is to give the image a source. The .src property reflects the src HTML attribute.
Important! Point your img.src to a very small and lightweight image file, if possible under 10 KB.
To prevent caching, a random parameter is added to the end of the URL, after the .png extension.
var random_string = Math.random().toString();
img.src = "http://static.bbci.co.uk/frameworks/barlesque/5.0.0/orb/4/img/bbc-blocks-dark.png" + "?" + random_string;
Now we can define the function that will run once the image loads, via .onload:
img.onload = function() {
var endTime = new Date();
var ping = endTime.getTime() - startTime.getTime();
alert(img.src + " loaded in " + ping + " ms");
}
</script>
Inside the function, the variable endTime receives the date and time right after the source image has loaded.
Lastly, the ping variable receives the final time minus the initial time.
The alert popup shows the result.
