node-mysql connectTimeout - javascript

I'm using Node.js to write a socket server for my app (built with Corona SDK), and so far so good. I'm also using node-mysql (page for node-mysql), and that works too, but... if nothing happens and nothing uses the connection to the DB, the connection closes after 2 minutes (the default timeout), while I need it to remain open as long as the script process is still running. Can I simply set the timeout attribute of the connection to 0 to keep it always open, or do I need to set it to a really long time, like 24 hours (in milliseconds), to keep it up at all times? (I'm suggesting 24 hours since our server restarts once a day anyway.)
Thanks!

Well, it would seem the problem was the MySQL server's wait_timeout, which was set to 300 seconds (5 minutes), and that is what disconnected my script. Seeing as I don't really want to change that variable, since my script isn't the only thing using that DB, for the time being I'm simply executing a light query with a setInterval every 4 minutes, hoping this will keep my connection alive.
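For reference, a minimal sketch of that keep-alive, assuming a node-mysql connection object is already open (SELECT 1 and the 4-minute interval are just one reasonable choice):
// ping the DB every 4 minutes, safely under the 5-minute wait_timeout
setInterval(function () {
    connection.query('SELECT 1', function (err) {
        if (err) console.error('keep-alive failed:', err);
    });
}, 4 * 60 * 1000);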

If you want to keep your MySQL connection always open, you have to set the connectTimeout option to 0. See the commit "Default connectTimeout to 2 minutes"; 0 was the default option before.
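For example, a sketch of passing that option to node-mysql's createConnection (the credentials here are placeholders):
var mysql = require('mysql');

var connection = mysql.createConnection({
    host: 'localhost',
    user: 'app_user',      // placeholder credentials
    password: 'secret',
    database: 'mydb',
    connectTimeout: 0      // 0 disables the timeout (the old default)
});
connection.connect();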


JavaScript unhackable countdown timer

I'm currently working on a quiz app. When a question appears, the user has 10 seconds to answer it, otherwise they don't get the points for that question. Once the timer is up, I want to automatically move to the next question. I am currently facing issues with how to make the 10-second countdown timer "unhackable" by the client.
My initial idea was to use something along the lines of setTimeout() on the client side for 10 seconds, and once the timer is complete, ask the server to fetch the next question. The problem with this is that the client-side timer can be hacked/modified to run longer than 10 seconds, potentially giving some users more than 10 seconds to answer the question.
client <--- sends question --- server
|
start timer for 10 seconds (as this is client-side, it could easily be extended)
|
.
10 seconds later
.
V
client --- ask for next question / send answer ---> server
In order to keep it unhackable, I thought of moving the time-checking logic to the server side. This would involve keeping two variables (A and B) on the server side per connected user, one representing the time the question was sent and the other representing the time an answer was given. The client-side timer would still run, except the server side uses the timestamps to perform some validation, checking whether the difference between timestamps A and B exceeds 10 seconds:
client <--- sends question --- server (send question at timestamp `A`)
|
start timer for 10 seconds (as this is client-side, it could easily be extended)
|
.
10 seconds later
.
V
client --- ask for next question / send answer ---> server (receive request at timestamp `B`)
|
+-----------------------------------------------------+
v
server logic:
duration = B - A
if(duration > 10 seconds) {
// allocated time exceeded
}
However, I see a few potential flaws with this. The time it takes for the question to arrive at the client from the server, and the time between when the server sent the question (time A) and when the client-side timer starts, won't be instantaneous and will depend on the ping/connection the user has to the server. Similar ping issues exist when the client asks for the next question. Moreover, I'm worried that if the client-side timer, which is supposed to run for 10 seconds, lags behind a little, it would also cause the server-side check to fail. As a result, checking whether the duration exceeded 10 seconds isn't enough, and it would require some additional buffer. However, I feel like arbitrarily hard-coding the buffer to something like 1 or 2 seconds could still lead to issues, and it feels like a hacky workaround that isn't very robust.
Question: I'm wondering if there is a different approach that I am missing to keep the client-side timer unhackable and accurate. I also want to avoid creating separate timers with setTimeout() or the like for each connected user on the server side, as many users could be connected at any given point in time, and having that many timers queued up on the server feels wasteful. I also want to keep the number of messages sent back and forth between the client and the server to a minimum.
What about a cookie?
Set a cookie with a unique token. Set its expiration to now() + 15 seconds. Save the token and time on the server side. Keep your client-side timer running with an auto-submit after 10 seconds.
When the answer comes in, if there is no cookie, it almost certainly means the answer was sent after the delay (and the timer was hacked).
So a cookie expiration time of now() + 10 seconds + a grace period of ~5 additional seconds should be more than enough to compensate for HTTP delays.
If they hack the timer, the cookie should have expired (and been deleted). If they also hack the cookie expiration(!), the token can still be used to retrieve the datetime the question was sent, which you compare with the datetime the answer was received.
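A minimal sketch of that bookkeeping, assuming Express with the cookie-parser middleware; the token format and the nextQuestion() helper are illustrative, not a prescribed API:
var sentAt = {}; // token -> time the question was sent

app.get('/question', function (req, res) {
    var token = Math.random().toString(36).slice(2); // illustrative token
    sentAt[token] = Date.now();
    res.cookie('quiz', token, { maxAge: 15000 }); // 10 s + ~5 s grace period
    res.json(nextQuestion()); // hypothetical helper
});

app.post('/answer', function (req, res) {
    var token = req.cookies.quiz;
    var started = token && sentAt[token];
    if (!started || Date.now() - started > 15000) {
        return res.status(403).json({ error: 'allocated time exceeded' });
    }
    // ...score the answer...
});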
Instead of starting the clock on the server when the question is sent, you could start the clock on the server when the question is shown to the user (on the client).
Maintain two clocks, one on the client and one on the server.
Timestamp every time-sensitive request (start quiz timer and end quiz timer) and check whether the timestamp discrepancy is within an acceptable tolerance.

MEAN.JS setInterval process for event loop (gets data from another server)

I have a mean.js server running that lets a user check their profile. I want a setInterval-like process running every second which, based on a condition, retrieves data from another server and updates MongoDB (simple polling / long polling). This updates the values that the user sees as well.
Q: Is this kind of event loop allowed in Node.js, and if so, where does the logic go that starts the interval when the server starts? Or can events only be caused by user actions (e.g., the user clicking their profile to view the data)?
Q: What are the implications of having both ends reading and writing to the same DB? Will the collisions just overwrite each other, or fault? Is there info on how much read/write traffic would overload it?
I think you can safely run a MongoDB cron job that updates every x days/hours/minutes. In the case of a user profile, I assume that's not critical data that requires you to update your DB in real time.
If you do need real-time updates, then set up DB replication and point your reads at a replica that is updated in real time.
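To the first question: yes, this is fine in Node.js; start the interval once when the server boots (for example from server.js after the app is created), not from a request handler. A sketch, where shouldPoll, fetchFromOtherServer, and the Profile model are illustrative names:
var POLL_MS = 1000;

setInterval(function () {
    if (!shouldPoll()) return; // your condition
    fetchFromOtherServer(function (err, data) {
        if (err) return console.error(err);
        // single-document writes in MongoDB are atomic, so concurrent
        // updates become last-write-wins rather than faults
        Profile.update({ _id: data.userId }, { $set: { stats: data.stats } },
            function (err) { if (err) console.error(err); });
    });
}, POLL_MS);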

how to update chat window with new messages

setInterval(function() {
    // send ajax request and update chat window
}, 1000);
Is there any better way to update the chat with new messages? Is this the right way to update the chat, using setInterval?
There are two major options (or rather, the two most popular ways):
Polling
First is polling, which is what you are doing: every x (milli)seconds you check whether anything on the server has changed.
This is the HTML4 way (excluding Flash etc., so HTML/JS only). For PHP it is not the best approach, because a single user opens a lot of connections per minute (with your example code, at least 60 connections per minute).
It is also recommended to wait for the response before scheduling the next request. If, for example, you request an update every 1 second but the response takes 2 seconds, you are hammering your server. See tymeJV's answer for more info.
Pushing
Next is pushing. This is more the HTML5 way, typically implemented with websockets. What happens is that the client "listens" on a connection and waits to be updated; when an update arrives, it triggers an event.
This is not great to implement in plain PHP, because you need a constant connection and your server will be overrun in no time, since PHP can't push connections to the background (like Java can, if I am correct).
I personally made a small chat app and used pusher. It works perfectly. I only used the free version, so I don't know how expensive it is.
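For illustration, a minimal client-side push sketch using the browser WebSocket API (the endpoint URL and message shape are assumptions, and this isn't tied to pusher):
var ws = new WebSocket('wss://example.com/chat');

ws.onmessage = function (event) {
    var msg = JSON.parse(event.data);
    // the server pushes each new message; no polling needed
    $('#chatWindow').append($('<p>').text(msg.user + ': ' + msg.text));
};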
Pretty much, yes, with one minor tweak: rather than encapsulate an AJAX call inside an interval (which could result in a pile-up of unreturned requests if something goes bad on the server), you should throw a setTimeout into the AJAX callback to create a recursive call. Consider:
function callAjax() {
    $.ajax(options).done(function() {
        // do your response
        setTimeout(callAjax, 2000);
    });
}
callAjax();

How to reduce time difference between clients receiving data in socket.io?

A node.js project with the modules socket.io and express.
Each client has a canvas that runs animations on it. When the server emits the init parameters, the animation can start.
Now the problem is that there is a time gap between clients when their animations start. The longer the animation runs, the more obvious the gap becomes; the positions of the figures become really different. But what I want is for everybody to see the same thing on their screen.
Here's how the server deliver the data:
socket.broadcast.emit('init', initData);
socket.emit('init', initData);
The animation function is in the client; it starts when the client receives the init data from the server.
I'm not sure if this is because each client receives the data at a different time.
So how can I reduce this gap?
Many thanks.
I think you should try the following: make sure (using onLoad events on the clients and collecting those events on the server with socket.io) that every client has downloaded the animation, and then send the signal to start it.
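A sketch of that barrier with socket.io 1.x; expectedClients and the 'loaded' event name are assumptions:
var ready = 0;
var expectedClients = 2; // however many participants you expect

io.on('connection', function (socket) {
    socket.on('loaded', function () { // the client emits this from onLoad
        ready += 1;
        if (ready === expectedClients) {
            io.emit('init', initData); // everyone starts together
        }
    });
});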
Here is a simple formula and routine that works with socket.io, PHP, or anything really.
I used it to fade in a live video stream 10 seconds before it aired. Given the inherent lag, device performance patterns, and wrong time zones, you can only expect to get about 30ms of precision forward or backward, but to most observers it all happens "at the same time".
Here is a simulation of a server that's about two minutes behind a simulated client (you), where the server wants the client to show a popup in 1.2 seconds:
//the first two vars should come from the server:
var serverWant=+new Date() - 123456 + 1200; // what server time is the event ?
var serverTime=+new Date() - 123456; // what server time is now ?
//the rest happens in your normal event using available info:
var clientTime=+new Date(); // what client time is now ?
var off= clientTime - serverTime; // how far off is the client from the server?
var clientWant= serverWant + off; // what client time is the event ?
var waitFor = clientWant - +new Date(); // how many millis to time out for until event ?
setTimeout(function(){ alert( 'SNAP!' );}, waitFor);
How reliable is this? Try changing both "- 123456"s to "+ 12345"s and see if the popup still waits 1.2 seconds to fire, despite not using Math.abs anywhere in the calculation...
In socket.io, you could send the server time and the scheduled time to the client for computation in a pre-event:
socket.broadcast.emit('pre-init', {
    serverTime: +new Date(),
    serverWant: +new Date() + 1200
});
and then use those values and the math above on the client to schedule the animation, whether a few moments or a few hours out, to the split second.
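The client-side counterpart might look like this (a sketch; startAnimation stands in for your animation entry point):
socket.on('pre-init', function (data) {
    var off = +new Date() - data.serverTime; // client clock minus server clock
    var clientWant = data.serverWant + off;  // event time on the client clock
    var waitFor = clientWant - +new Date();  // millis until the event
    setTimeout(startAnimation, Math.max(0, waitFor));
});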
You need the Dead Reckoning technique in order to keep the simulated client-side state as close to the real state on the server as possible.
You might send state packets to clients periodically, for example every 200ms (5 times a second), and on the client side extrapolate from this data.
In addition to this, you have to remember that different clients have different latency. So if you want to keep the state the same everywhere, there are generally two approaches: interpolation (use the last known data point and the one before it) or extrapolation (use the last known data and predict into the future based on your own latency).
Extrapolation suits real-time interactive stuff better, but has problems with error correction: the client will sometimes make a wrong prediction (an object suddenly stopped, but based on the delay the client predicted it was still moving), which then has to be corrected.
Interpolation makes everything somewhat delayed and in the past, but does not suffer from prediction errors. The drawback is that you need to wait, before interpolating, an amount of time equal to the latency of the slowest user. This means that a slower user forces everyone else to be slowed down as well.
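For illustration, a minimal client-side extrapolation sketch; the { x, vx, sentAt } state shape and drawFigureAt are assumptions, not from the original answer:
var lastState = null;

socket.on('state', function (state) {
    lastState = state; // e.g. { x: 120, vx: 0.05, sentAt: <server ms> }
});

function render(serverNow) {
    if (!lastState) return;
    var dt = serverNow - lastState.sentAt;            // millis since snapshot
    var predictedX = lastState.x + lastState.vx * dt; // linear extrapolation
    drawFigureAt(predictedX);                         // hypothetical draw call
}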

jquery/javascript setInterval

Currently I'm developing a user notification alert message function.
I managed to use setInterval to control my Ajax call (to check if there's any notification message for the user). But my problem is that I only want the notification message to appear once on the page (right now it displays multiple notification alert messages on the screen). I know you can use setTimeout to make it call only once, but I also need the page to check for new notification messages every 5 minutes.
My second question: is it possible to make the first Ajax call instantly and then all subsequent calls every 5 minutes? I want the system to check instantly once the user logs in, and then every 5 minutes afterward.
Here is my code:
function getAjaxNotice() {
    $.post("/async/getnotification", {},
        function(response) {
            var notice = $(response);
            $("#notices").prepend(notice);
        });
    return false;
}
setInterval("getAjaxNotice()", 50000);
First of all, you should wrap your initialization code in an onLoad function:
$(document).ready(function() {
    // Put all your code here
});
Second, making it appear once is easy: use .html() to set the content rather than prepend to it:
$("#notices").html(notice);
Third, as a style note, you should not pass a string to setInterval(). Rather, pass a function name:
setInterval( getAjaxNotice, 50000 );
Finally, to make it call the function now, and again after every 5 minutes, use:
// Set the timer
setInterval( getAjaxNotice, 50000 );
// Call it once now
getAjaxNotice();
Also note that 50000 is 50 seconds, not 5 minutes. 5 minutes would be 5 * 60 * 1000 = 300000.
For the first problem, you should store the return value from setInterval and then call clearInterval(myIntervalId) when you receive an alert.
For the second problem, you can call getAjaxNotice once during the onload of the body, and then, if no alerts are received, call setInterval at that point.
setInterval's time is in milliseconds.
5 minutes * 60 seconds/minute * 1000 milliseconds/second = 300000 ms
Also, I suggest you pass a function to setInterval, not a string, so you can avoid the implicit use of eval.
setInterval(getAjaxNotice, 300000);
To call getAjaxNotice at the start of the page, put it in a ready block.
$(function(){
    getAjaxNotice();
});
A couple of things...
setInterval("getAjaxNotice()", 50000);
is not 5 minutes; 5 minutes = 300000 milliseconds.
And if you want it to run instantly and THEN every 5 minutes, you can simply do:
$(document).ready(function() {
    getAjaxNotice();
    function getAjaxNotice() {
        $.post("/async/getnotification", {},
            function(response) {
                var notice = $(response);
                $("#notices").prepend(notice);
            });
        return false;
    }
    // pass the function reference; getAjaxNotice() would call it
    // immediately and pass its return value instead
    setInterval(getAjaxNotice, 300000);
});
In your situation it sounds like you are dealing with a few problems. Using your current approach, you can make the Ajax call initially and follow it up with a timer:
getAjaxNotice();
setTimeout(getAjaxNotice, 300000);
Secondly, ensuring the user receives the message only once can be done easily if you have some kind of "message confirmed" event. Assume your user could have browsers open on multiple computers: if you make the user click the message, click an OK button, or perform some action to acknowledge they received it, you can fire off another Ajax call to delete that message from the buffer on your server while still displaying it in all open browsers. The local browser would only display it once, because you can suppress duplicates client-side (based on whatever criteria make sense for your application).
However, you should look into long polling and COMET, http://en.wikipedia.org/wiki/Comet_(programming). Comet is a concept built around pushing notifications to web browsers based on server-side events, as opposed to web browsers constantly asking the server for changes.
Due to limitations in web frameworks and browsers, this has been accomplished with a few different technologies, but long polling seems to be the most prevalent. HTML5 and websockets are trying to make changes that could prevent polling altogether, but they're not readily available yet.
Long polling, http://en.wikipedia.org/wiki/Push_technology, and COMET-based architectures have been used by companies like meebo and facebook. Don't quote me on this, but for some reason I'm inclined to believe facebook uses an Erlang-based web server to serve their chat messages. Erlang and NodeJs are just a couple of the solutions you can use to build lightweight web servers that work well with tons of long-polling requests hitting your servers.
You should definitely go read up on all these things yourself, as there is a wealth of information available. I have experimented with creating a NodeJs server on Amazon EC2; I'm traditionally a .NET guy and don't feel IIS is the right solution for supporting the long-polling features of an application, and I have to say I like NodeJs a lot. Plus, the JavaScript language is much more familiar to me than my limited knowledge of Erlang.
