I'm currently working on a quiz app. When a question appears, the user has 10 seconds to answer it, otherwise they don't get the points for that question. Once the timer is up, I want to automatically move to the next question. I'm currently stuck on how to make the 10-second countdown timer "unhackable" by the client.
My initial idea was to use something along the lines of setTimeout() on the client side for 10 seconds, and once the timer completes, ask the server for the next question. The problem is that the client-side timer can be hacked/modified to run for longer than 10 seconds, potentially giving some users more than 10 seconds to answer the question.
client <--- sends question --- server
|
start timer for 10 seconds (as this is client-side, it could easily be extended)
|
.
10 seconds later
.
V
client --- ask for next question / send answer ---> server
In order to keep it unhackable, I thought of moving the time-checking logic to the server side. This would involve keeping two variables (A and B) on the server side per connected user, one representing the time the question was sent and the other the time an answer was received. The client-side timer would still run, except the server uses the timestamps to validate whether the difference between A and B exceeds 10 seconds:
client <--- sends question --- server (send question at timestamp `A`)
|
start timer for 10 seconds (as this is client-side, it could easily be extended)
|
.
10 seconds later
.
V
client --- ask for next question / send answer ---> server (receive request at timestamp `B`)
|
+-----------------------------------------------------+
v
server logic:
    duration = B - A
    if (duration > 10 seconds) {
        // allocated time exceeded
    }
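In code, the idea would be something like this (a minimal Node sketch; the map and function names are made up for illustration):

var questionSentAt = {};

function sendQuestion(userId, question) {
    questionSentAt[userId] = Date.now(); // timestamp A
    // ... deliver the question to the client here
}

function handleAnswer(userId, answer) {
    var duration = Date.now() - questionSentAt[userId]; // B - A, in ms
    if (duration > 10000) {
        // allocated time exceeded: award no points
        return false;
    }
    // ... score the answer normally
    return true;
}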
However, I see a few potential flaws with this. The time it takes for the question to travel from the server to the client, and the time between when the server sent the question (timestamp A) and when the client-side timer starts, won't be instantaneous and will depend on the user's ping/connection to the server. Similar ping issues exist when the client asks for the next question. Moreover, I'm worried that if the client-side timer that is supposed to run for 10 seconds lags behind a little, it would also cause the server-side check to fail. As a result, checking whether the duration exceeds 10 seconds isn't enough; some additional buffer is required. However, arbitrarily hard-coding the buffer to something like 1 or 2 seconds could still lead to issues, and it feels like a hacky workaround that isn't very robust.
Question: I'm wondering if there is a different approach that I'm missing to keep the client-side timer unhackable and accurate. I also want to avoid creating separate timers with setTimeout() or the like for each connected user on the server side: many users could be connected at any given point in time, and having that many timers queued up on the server feels wasteful. I'd also like to keep the number of messages sent back and forth between the client and the server to a minimum.
What about a cookie?
Set a cookie with a unique token and set its expiration to now() + 15 seconds. Save the token and time on the server side. Keep your client-side timer running, with an auto-submit after 10 seconds.
When the answer comes in, if there is no cookie, it means the answer was sent after the delay (and the timer was hacked).
So a cookie expiration time of now() + 10 seconds + a grace period of ~5 additional seconds should be more than enough to compensate for the HTTP delays.
If they hack the timer, the cookie will have expired (and been deleted). If they also hack the cookie expiration(!), the token will still be used to retrieve the datetime the question was sent, and you will compare it with the datetime the answer was received.
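A sketch of that flow with Express and cookie-parser (both real libraries); the routes, the token store, and nextQuestion() are made up for illustration:

var express = require('express');
var crypto = require('crypto');

var app = express();
app.use(require('cookie-parser')());

var tokens = {}; // token -> timestamp the question was sent

app.get('/question', function (req, res) {
    var token = crypto.randomBytes(16).toString('hex');
    tokens[token] = Date.now();
    // 10s to answer + ~5s grace for HTTP delays (maxAge is in ms)
    res.cookie('quizToken', token, { maxAge: 15000, httpOnly: true });
    res.json(nextQuestion()); // nextQuestion() is hypothetical
});

app.post('/answer', function (req, res) {
    var token = req.cookies.quizToken;
    if (!token || !(token in tokens)) {
        return res.status(403).send('answer arrived after the delay');
    }
    // belt and braces: re-check the server-side timestamp even if the
    // client tampered with the cookie's expiration
    if (Date.now() - tokens[token] > 15000) {
        return res.status(403).send('allocated time exceeded');
    }
    // ... score the answer here
});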
Instead of starting the clock on the server when the question is sent, you could start the clock on the server when the question is shown to the user (on the client).
Maintain two clocks: one on the client, the other on the server.
Timestamp every time-sensitive request (start quiz timer, end quiz timer) and check whether the timestamp discrepancy is within an acceptable tolerance.
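Roughly, as an Express-style sketch (the field names and tolerance value are illustrative, not prescribed):

app.post('/answer', function (req, res) {
    var serverNow = Date.now();
    var shownAt = req.body.shownAt;       // client: when the question appeared
    var answeredAt = req.body.answeredAt; // client: when the answer was sent
    var TOLERANCE_MS = 2000;              // illustrative tolerance

    // the client's claimed duration must fit the 10s window...
    var clientDuration = answeredAt - shownAt;
    // ...and the client's clock must roughly agree with the server's
    var discrepancy = Math.abs(serverNow - answeredAt);

    if (clientDuration > 10000 || discrepancy > TOLERANCE_MS) {
        return res.status(403).send('time check failed');
    }
    // ... accept and score the answer
});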
Related
I was wondering if there was a way to get the time (in ms) between the client sending a message to the server and the server receiving that message
I can't compare times in milliseconds between the server and the client using Date.now(), because every device's clock might be off by a few seconds.
I can find the time for a round trip by logging the time when I send a message and logging it again when I receive the server's reply. However, the time it takes for a message to get from the client to the server may not be the same as the time it takes to get from the server to the client, so I can't simply divide the round-trip time by 2.
Any suggestions on how I can find this time or at least the difference between Date.now() on the client and the server?
Thanks in advance.
You can achieve this if you first synchronize the clocks of both your server and client using NTP. This requires access to an external time server; however, you can also run an NTP daemon on your own server (see ntpd).
There are several modules that implement NTP in node: node-ntp-client or sntp
Here's an example with node-ntp-client:
var ntpClient = require('ntp-client');

var clientOffset = 0;

ntpClient.getNetworkTime("pool.ntp.org", 123, function(err, date) {
    if (err) {
        console.error(err);
        return;
    }
    // how far the local clock is ahead of (or behind) NTP time
    clientOffset = Date.now() - date.getTime();
});
When sending data to the server, send the timestamp as well:
var clientTimestamp = Date.now() - clientOffset;
The server would have its own offset. When receiving the packet, it can calculate the latency using:
var latency = Date.now() - serverOffset - clientTimestamp;
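Putting the two halves together on the server (reusing node-ntp-client from above; onPacket and the packet shape are illustrative):

var ntpClient = require('ntp-client');

var serverOffset = 0;
ntpClient.getNetworkTime("pool.ntp.org", 123, function (err, date) {
    if (err) { console.error(err); return; }
    serverOffset = Date.now() - date.getTime();
});

// later, in whatever handler receives the client's packet:
function onPacket(packet) {
    // packet.clientTimestamp was computed as Date.now() - clientOffset
    var latency = Date.now() - serverOffset - packet.clientTimestamp;
    console.log('one-way latency ~' + latency + 'ms');
}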
I was wondering if there was a way to get the time (in ms) between the client sending a message to the server and the server receiving that message
No, there is not. At least, not without a common clock reference.
If I were to mail you a letter, you know what day you received the letter on but you don't know when it was sent. Therefore, you have no idea how long it took the post office to route and deliver the letter to you.
One possible solution is for me to date the letter. When you receive it, you can compare the received date to the date I sent it and determine how many days it was in transit. However, what if I wrote down the wrong date? Suppose I thought it was Friday when it was really Wednesday. Then, you can't accurately determine when it was sent.
Bringing this back to computers, we have to use our realtime clock (RTC) to timestamp the packet we send. Even with reasonable accuracy, our RTCs might be set a minute apart from each other. I could send you a packet at 01:23:01.000Z my time, and you might receive it 10 milliseconds later... at 01:23:55.010Z your time, and calculate that it took 54 seconds to reach you!
Even if you synchronize with NTP over the internet, you're potentially tens to hundreds of milliseconds off.
The way very accurate clock synchronization is usually done is via GPS receivers, which by their nature serve as an extremely accurate clock source. If you and I were both very accurately synchronized to GPS receivers, I could send you a packet and you could calculate how long it took.
This is generally impractical, which is why when we ping stuff, we use round-trip time.
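For contrast, a round-trip measurement needs no shared clock at all, because both timestamps come from the same machine. A sketch with jQuery (the /ping endpoint is illustrative):

var sentAt = Date.now();
$.get('/ping', function () {
    var rtt = Date.now() - sentAt; // round trip, measured on one clock
    console.log('round trip took ' + rtt + 'ms');
});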
I'm using a timer function to check if the session is valid or not every five seconds.
setInterval(checksession, 5000);

function checksession(called_for) {
    //alert('check-session')
    $.ajax({
        type: 'POST',
        url: 'CheckSession',
        success: validateresult,
        data: { antiCSRF: '{{acsrf}}',
                session_id: '{{session_id}}' }
        //,error: function(){ alert('Session check failed') }
    });
}
I would like to know what will happen if I have multiple ajax calls at the same time when the session is checked. Will it be 2 separate threads?
Is this the correct way to check session?
So first off, you're better off (imo) using setTimeout for this rather than setInterval. You really want your next check to happen x seconds after you have the answer from the previous check, not every x seconds regardless (because of server lag, latency on the network, whatever). Bottom line: imo it's better to do a setTimeout, then do another setTimeout in the ajax callback.
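A sketch of that pattern using the code from the question (jQuery's complete callback fires whether the request succeeded or failed):

function checksession() {
    $.ajax({
        type: 'POST',
        url: 'CheckSession',
        data: { antiCSRF: '{{acsrf}}',
                session_id: '{{session_id}}' },
        success: validateresult,
        complete: function () {
            // re-arm 5s after this check finishes, success or failure
            setTimeout(checksession, 5000);
        }
    });
}

checksession();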
JS is single threaded, so you won't do it in two separate threads, but you can have multiple ajax calls pending at once, which I think is what you mean.
Finally, on "correct". It really depends a bit on what you're trying to accomplish. In general, sessions with sliding expirations (the only real time that any 'check' to the server should matter, since otherwise they can get the expiry once and count the time on the client) are there to timeout after a period of inactivity. If you're having your script 'fake' activity by pinging the server every five seconds, then you might as well just set your expiry to infinity and not have it expire ever. Or set it to expire when the browser window closes, either way.
If you're trying to gracefully handle an expired session, the better way to handle it in Ajax is to just handle the 401 error the server should be sending you if you're not logged in anymore.
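For instance (the redirect target is illustrative):

$.ajax({
    type: 'POST',
    url: 'CheckSession',
    success: validateresult,
    error: function (xhr) {
        if (xhr.status === 401) {
            // session expired: send the user back to the login page
            window.location.href = '/login';
        }
    }
});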
I'm using Node.js to write a socket server for my app (using Corona SDK), and so far so good. I'm also using node-mysql (page for node-mysql), and that works too, but... if nothing happens and nothing uses the connection to the DB, then after 2 minutes (the default timeout) the connection closes, while I need it to remain open as long as the script process is still running. Can I simply set the timeout attribute of the connection to 0 to keep it always open, or do I need to set it to a really long time, like 24 hours (in milliseconds), to keep it up at all times? (I'm saying 24 hours since our server restarts once a day anyway.)
Thanks!
Well, it would seem the problem was the MySQL server's wait_timeout, which was set to 300 seconds (5 minutes), and that is what disconnected my script. Seeing as I don't really want to change that variable, since my script isn't the only thing using that DB, for the time being I'm simply executing a light query with a setInterval every 4 minutes, hoping this will keep my connection alive.
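In case it helps anyone, a sketch of that keep-alive with node-mysql (credentials omitted; SELECT 1 is just a cheap no-op query):

var mysql = require('mysql');
var connection = mysql.createConnection({ /* your credentials */ });
connection.connect();

// one trivial query every 4 minutes stays under the 5-minute wait_timeout
setInterval(function () {
    connection.query('SELECT 1', function (err) {
        if (err) console.error('keep-alive failed:', err);
    });
}, 4 * 60 * 1000);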
If you want to keep your MySQL connection always open, you have to set the connectTimeout option to 0. See the commit "Default connectTimeout to 2 minutes"; 0 was the default option before.
A node.js project with modules socket.io and express.
Now each client has a canvas which runs animations on it. When the server emits the initiate parameter, the animation can start.
Now the problem is that there is a time gap between clients when their animations start. The longer the animation runs, the more obvious the gap becomes: the positions of the figures end up really different. But what I want is for everybody to see the same thing on their screen.
Here's how the server deliver the data:
socket.broadcast.emit('init', initData);
socket.emit('init', initData);
The animation function is in the client, it starts when receiving the initiate data from the server.
I'm not sure if it's because each client receives this data at a different time.
So how to reduce this gap?
Many thanks.
I think you should try the following: make sure (using onload events, collected on the server with socket.io) that every client has downloaded the animation, and then send the signal to start it.
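Something like this on the server (the event names and the expected client count are illustrative):

var ready = 0;
var expectedClients = 2; // however many players you are waiting for

io.on('connection', function (socket) {
    socket.on('animation-loaded', function () {
        ready++;
        if (ready === expectedClients) {
            // everyone has the assets: start the animation together
            io.sockets.emit('init', initData);
        }
    });
});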
Here is a simple formula and routine that works with socket.io, PHP, or anything really.
I used it to fade in a live video stream 10 seconds before it aired. Given the inherent lag, device performance patterns, and wrong time zones, you can only expect to get about 30ms of precision forward or backward, but to most observers it all happens "at the same time".
Here is a simulation of a server that's about two minutes behind a simulated client (you), where the server wants the client to show a popup in 1.2 seconds:
//the first two vars should come from the server:
var serverWant=+new Date() - 123456 + 1200; // what server time is the event ?
var serverTime=+new Date() - 123456; // what server time is now ?
//the rest happens in your normal event using available info:
var clientTime=+new Date(); // what client time is now ?
var off= clientTime - serverTime; // how far off is the client from the server?
var clientWant= serverWant + off; // what client time is the event ?
var waitFor = clientWant - +new Date(); // how many millis to time out for until event ?
setTimeout(function(){ alert( 'SNAP!' );}, waitFor);
How reliable is this? Try changing both "- 123456"s to "+ 12345"s and see if the popup still waits 1.2 seconds to fire, despite not using Math.abs anywhere in the calculation...
In socket.io, you could send the server time and the scheduled time to the client for computation in a pre-init event:
socket.broadcast.emit('pre-init', {
    serverTime: +new Date(),
    serverWant: +new Date() + 1200
});
and then use those values and the math above to schedule the animation on demand, whether moments or hours ahead, to the split second:
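The client side of that pre-init, applying the formula above (startAnimation stands in for your own animation entry point):

socket.on('pre-init', function (data) {
    var off = +new Date() - data.serverTime; // client/server clock offset
    var clientWant = data.serverWant + off;  // event time on the client clock
    var waitFor = clientWant - +new Date();  // millis left until the event
    setTimeout(startAnimation, waitFor);
});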
You need the Dead Reckoning technique in order to simulate client-side state as close to the real server state as possible.
You might send state packets to clients periodically, for example every 200ms (5 times a second), and extrapolate from this data on the client side.
Additionally, you have to remember that different clients have different latency. So if you want to keep the same state everywhere, there are generally two approaches: interpolation (use the last known data and the one before it) or extrapolation (use the last known data and predict into the future based on your own latency).
Extrapolation suits real-time interactive stuff better, but has problems with error correction: the client will sometimes make a wrong prediction (an object suddenly stopped, but based on the delay the client predicted it was still moving).
Interpolation makes everything somewhat delayed and in the past, but does not suffer from prediction errors. The drawback is that you need to wait, before interpolating, an amount of time equal to the slowest user's latency. This means the slowest user forces everyone else to be slowed down as well.
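A minimal sketch of both, for one coordinate (the snapshot fields t and x are illustrative):

// extrapolation: estimate velocity from the last two snapshots
// and project forward to "now"
function extrapolate(prev, last, now) {
    var vx = (last.x - prev.x) / (last.t - prev.t);
    return last.x + vx * (now - last.t);
}

// interpolation: render slightly in the past, between two known snapshots
function interpolate(prev, last, renderTime) {
    var f = (renderTime - prev.t) / (last.t - prev.t);
    return prev.x + (last.x - prev.x) * f;
}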
Currently I'm developing a user notification alert message function.
I managed to use setInterval to control my Ajax call (to check if there's any notification msg for the user). But my problem is that I only want the notification message to appear once on the page (right now it displays multiple notification alert msgs on the screen). I know you can use setTimeout to make the call only once, but I also need the page to check for new notification message alerts every 5 minutes.
Second question: is it possible to make the first Ajax call instantly and then all subsequent calls every 5 minutes? I want the system to check instantly once the user logs in, and then every 5 minutes afterwards.
Here is my code
function getAjaxNotice() {
    $.post("/async/getnotification", {},
        function(response) {
            var notice = $(response);
            $("#notices").prepend(notice);
        });
    return false;
}

setInterval("getAjaxNotice()", 50000);
First of all, you should wrap your initialization code in an onLoad function:
$(document).ready(function() {
// Put all your code here
});
Second, making it appear only once is easy: use .html() to set the content rather than add to it:
$("#notices").html(notice);
Third, as a style note, you should not pass a string to setInterval(). Rather, pass a function name:
setInterval( getAjaxNotice, 50000 );
Finally, to make it call the function now, and again after every 5 minutes, use:
// Set the timer
setInterval( getAjaxNotice, 50000 );
// Call it once now
getAjaxNotice();
Also note that 50000 is 50 seconds, not 5 minutes. 5 minutes would be 5 * 60 * 1000 = 300000.
For the first problem, you should be storing the return value from setInterval, and then calling clearInterval(myIntervalId) when you receive an alert.
For the second problem, you can call getAjaxNotice once during onload of the body and then, if no alerts are received, call setInterval at that point.
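A sketch of both suggestions together (the handler names are illustrative):

var intervalId = null;

function onAlertReceived() {
    clearInterval(intervalId); // stop polling once an alert arrives
}

window.onload = function () {
    getAjaxNotice();           // first check immediately on load
    intervalId = setInterval(getAjaxNotice, 300000);
};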
setInterval's time is in milliseconds.
5 minutes * 60 seconds * 1000 milliseconds = 300000ms
Also, I suggest you pass a function to setInterval not a string, so you can avoid the implicit use of eval.
setInterval(getAjaxNotice, 300000);
To call getAjaxNotice at the start of the page, put it in a ready block.
$(function(){
getAjaxNotice();
});
A couple of things...
setInterval("getAjaxNotice()", 50000);
Is not 5 minutes.
5 minutes = 300000 milliseconds.
And if you want it to run instantly and THEN run every 5 minutes, you can simply do:
$(document).ready(function() {
    getAjaxNotice();

    function getAjaxNotice() {
        $.post("/async/getnotification", {},
            function(response) {
                var notice = $(response);
                $("#notices").prepend(notice);
            });
        return false;
    }

    // pass the function itself: getAjaxNotice() would run immediately
    // and hand setInterval its return value (false) instead
    setInterval(getAjaxNotice, 300000);
});
In your situation it sounds like you are dealing with a few problems. Using your current approach, you can make the initial ajax call and follow it up with a setTimeout (passing the function itself rather than a string):
getAjaxNotice();
setTimeout(getAjaxNotice, 300000);
Secondly, ensuring the user receives the message only once is easy if you have some kind of "message confirmed" event. Assume your user could have browsers open on multiple computers: if you make the user click the message, click an OK button, or perform some other action to acknowledge they received it, you can fire off another ajax call to delete that message from the buffer on your server, yet still display it in all open browsers. Each browser would only display it once, because you can suppress duplicates client side (based on whatever criteria make sense for your application).
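A sketch of that client-side de-duplication plus acknowledgement (the message ids, the markup, and the /async/acknotification endpoint are invented for illustration):

var seenMessageIds = {};

function showNotice(msg) {
    if (seenMessageIds[msg.id]) return; // already displayed once
    seenMessageIds[msg.id] = true;

    var notice = $('<div class="notice">').text(msg.text);
    notice.on('click', function () {
        // acknowledge so the server drops it from its buffer
        $.post('/async/acknotification', { id: msg.id });
        notice.remove();
    });
    $('#notices').prepend(notice);
}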
However, you should look into long polling and COMET, http://en.wikipedia.org/wiki/Comet_(programming). Comet is a concept around pushing notifications to web browsers based on server-side events, as opposed to web browsers constantly asking the server for changes.
Due to limitations in web frameworks and browsers, this has been accomplished with a few different technologies, but long polling seems to be the most prevalent. HTML5 and WebSockets are trying to make changes that could remove the need for polling altogether, but they're not readily available yet.
Long polling, http://en.wikipedia.org/wiki/Push_technology, and COMET-based architectures have been used by companies like Meebo and Facebook. Don't quote me on this, but for some reason I'm inclined to believe Facebook uses an Erlang-based web server to serve their chat messages. Erlang and Node.js are just a couple of the solutions you can use to build lightweight web servers that handle tons of long-polling requests well.
You should definitely go read up on all these things yourself, as there is a wealth of information available. I have experimented with creating a Node.js server on Amazon EC2; I'm traditionally a .NET developer and don't feel IIS is the right solution for supporting the long-polling features of an application, and I have to say I like Node.js a lot. Plus JavaScript is much more familiar to me than my limited knowledge of Erlang.