Node.js MySQL Connection Issue

What's the problem?
I'm using node-mysql to connect to mysql.
I have a really hard time dealing with the server disconnects / wait_timeouts as mentioned here: https://github.com/felixge/node-mysql#server-disconnects
I receive the error message This socket has been ended by the other party every time I try to recreate the connection while handling a PROTOCOL_CONNECTION_LOST error.
What I'm trying to do
// Connect Function
db.connect = function(callback){
  // Destroy the connection if there is already one
  if(db.connection) {
    console.log('[mysql]', 'connection destroy');
    db.connection.destroy();
    db.connection = null;
  }
  // Create the Connection
  db.connection = MySQL.createConnection(options);
  // Connect using the Connection
  db.connection.connect(function(error) {
    if(error){
      console.log('[mysql]', 'connect failed', error);
    } else {
      console.log('[mysql]', 'connection success');
      db.connection.database = options.database;
      callback(db.connection, error);
    }
  });
  // Listen on Connection Errors
  db.connection.on('error', function(error) {
    // Connection to the MySQL server is usually
    // lost due to either a server restart, or a
    // connection idle timeout (the wait_timeout server variable configures this)
    if(error.code === 'PROTOCOL_CONNECTION_LOST') {
      console.log('[mysql]', 'PROTOCOL_CONNECTION_LOST');
      db.connect(callback);
    } else {
      console.log('[mysql] Connection Error: ', error);
    }
  });
  // Return the Connection Instance
  return db.connection;
};
Additional Details
wait_timeout and interactive_timeout have to be set to around 10 seconds in my.cnf to reproduce the issue:
[mysqld]
wait_timeout = 10
interactive_timeout = 10

The library is accidentally trying to write over the TCP socket after a FIN packet has been received, which means the TCP connection is half-closed by the other side. I'll fix this to give you a better error message, but it's still an error; either your network or something else is killing your MySQL connections.
Are you using the connection pool? According to the stack trace, it looks like you are just making a query on a connection after some timeout. It's possible you are holding onto an inactive connection too long and something on the network or the MySQL server is ending the connection.
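If the manual reconnect keeps racing half-closed sockets, the pooled setup the answer hints at is usually simpler. A minimal sketch, assuming placeholder credentials in place of the asker's real options object:

var mysql = require('mysql');

// The option values here are placeholders; reuse your existing `options`.
var pool = mysql.createPool({
  host: 'localhost',
  user: 'app',
  password: 'secret',
  database: 'app',
  connectionLimit: 10
});

// pool.query() checks a connection out, runs the query, and releases it.
// Connections that hit a fatal error (such as PROTOCOL_CONNECTION_LOST after
// wait_timeout) are removed from the pool and replaced on demand, so no
// manual reconnect logic is needed.
pool.query('SELECT 1 + 1 AS solution', function (error, rows) {
  if (error) {
    console.log('[mysql]', 'query failed', error);
    return;
  }
  console.log('[mysql]', 'solution =', rows[0].solution);
});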

Related

Socket.io, io.use occur before connection is made?

I'm using the Socket.io use function (https://socket.io/docs/server-api/#namespace-use-fn) to authenticate connections, but I'm not sure if I can disconnect a socket from within that function, because I don't know if the connection is made prior to calling 'use'.
Does Socket.io wait until the completion of io.use before making the connection? If the authentication fails, how can I stop the connection from being made?
My testing has shown that io.on('connection') is not fired until next() is called within the 'use' function. But I've also found that when I include socket.disconnect() inside of the 'use' function, the client will in fact disconnect, but the server still thinks the client is connected, as shown by io.clients.
I've read this SO question which cleared up some details about the 'use' function but didn't address my question: Socket.IO middleware, io.use
io.use(function (socket, next) {
  let token = socket.handshake.query.token;
  if (verify(token) == false) {
    socket.disconnect(); // Can I do this here?
  }
  next();
});
// This isn't fired until the above 'use' function completes, but
// despite disconnecting above, 'on connection' still fires and `io.clients` shows the client is connected
io.on('connection', function (socket) {
  setTimeout(function(){
    // Get number of connected clients:
    io.clients((error, clients) => {
      if (error) throw error;
      console.log(clients); // => [PZDoMHjiu8PYfRiKAAAF, Anw2LatarvGVVXEIAAAD]
    });
  }, 2000);
});
EDIT
I have mostly found the answer to this through testing. next() should only be called if the authentication succeeds. If it fails, just don't call next() and the socket will not connect. However, I do not understand why calling socket.disconnect() inside of the 'use' function creates a scenario where the server thinks it's connected to the client, but the client is actually disconnected.
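For reference, the Socket.IO middleware API also lets you reject the handshake by passing an Error to next(), which avoids the half-disconnected state described above entirely. A small sketch, with verify() standing in for whatever token check is actually used:

io.use(function (socket, next) {
  let token = socket.handshake.query.token;
  if (!verify(token)) {
    // Rejecting here means 'connection' never fires for this socket; the
    // client receives an error event for the failed handshake instead.
    return next(new Error('authentication error'));
  }
  next();
});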

Node.js "ws" websocket server randomly disconnects client after ~30 seconds, error 1006

I have a simple websocket server like so:
const WebSocket = require('ws');
const wss = new WebSocket.Server({ server: server, path: "/ws" });

wss.on('connection', function connection(ws, req) {
  console.log("Connected");
  ws.on('message', function incoming(message) {
    console.log('received: %s', message);
  });
  ws.on('close', function close(code, reason) {
    console.log("Code: " + code + " | Reason: " + reason);
    console.log('disconnected');
  });
  ws.send('something');
});
I have a client that connects to it fine, but after about 30 seconds of connection, the websocket server closes the connection with an error code of "1006". Google tells me this means the connection was abnormally disrupted. However, I'm not sure what could be causing this. The ws.on('error') callback is also not triggered. How do I keep an indefinite/longer connection with my client?
Nginx has 30 seconds set as the default timeout.
You'll need to increase this number using a proxy_read_timeout directive.
For example, to extend the timeout to a day (which is probably overkill):
proxy_read_timeout 86400s;
Usually I set this in the location block, so that it can be overridden just for the websocket without affecting locations where the default timeout is desirable.
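Independent of the proxy timeout, the ws README also documents a ping/pong heartbeat that keeps an otherwise idle connection generating traffic, so intermediaries like Nginx never hit their read timeout. A sketch along those lines (the 25-second interval is an arbitrary choice below the observed 30-second cutoff):

wss.on('connection', function connection(ws) {
  ws.isAlive = true;
  ws.on('pong', function heartbeat() {
    ws.isAlive = true;
  });
});

// Ping every client periodically; terminate any client that never answered
// the previous ping.
const interval = setInterval(function ping() {
  wss.clients.forEach(function each(ws) {
    if (ws.isAlive === false) return ws.terminate();
    ws.isAlive = false;
    ws.ping();
  });
}, 25000);

wss.on('close', function close() {
  clearInterval(interval);
});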

How to keep mysql connections alive in node.js?

I'm using mysql connection pools in Node JS. After some idle time, the connections expire and the next time I perform a query, a new connection needs to be created. This can cause a delay of several seconds. Unacceptable!
I would like to implement keepalive functionality to periodically poll the database and ensure the consistent health of connections to the backend. I am looking for input from others who have attempted the same, or feedback on my approach.
const mysql = require('mysql');
const pool = createConnectionPool();

setInterval(keepalive, 180000); // 3 mins

function keepalive() {
  pool._freeConnections.forEach((connection) => pool.acquireConnection(connection, function () {
    connection.query('SELECT 1 + 1 AS solution', function (err) {
      if (err) {
        console.log(err.code); // 'ER_BAD_DB_ERROR'
      }
      console.log('Keepalive RDS connection pool using connection id', connection.threadId);
    });
  }));
}
This keepalive has been somewhat successful:
- once a connection is opened, it stays open
- connections never time out
- if a connection is lost, it is recreated on the next interval
This keepalive is not ideal:
- the mysql connection pool is lazy, only creating and restoring connections as needed. With this keepalive, the pool is no longer lazy: once a connection is opened, the keepalive keeps it open, so the pool no longer scales with traffic.
- I'm not confident that iterating through the list of free connections and running a query on each is a wise approach. Is it possible for the application to check out a connection from the pool while that same connection is being used by the keepalive?
Another possible approach is to ditch the keepalive functionality within the application, and rely on heartbeat traffic to keep a minimum of connections alive.
Has anybody attempted to implement keepalive functionality, use a package or tool that provides this feature, or ended up using a different approach?
Did you try this, in case you are using a connection pool?
const pool = mysql.createPool({...});

function keepAlive() {
  pool.getConnection(function(err, connection){
    if(err) { console.error('mysql keepAlive err', err); return; }
    console.log('ping db');
    connection.ping(); // this is what you want
    connection.release();
  });
}
setInterval(keepAlive, 60000); // ping to DB every minute
I don't use a pool, and this code works:
function pingdb() {
  var sql_keep = `SELECT 1 + 1 AS solution`;
  con.query(sql_keep, function (err, result) {
    if (err) throw err;
    console.log("Ping DB");
  });
}
setInterval(pingdb, 40000);

Suppress "WebSocket connection to 'xyz' failed"

I've written a web application that uses web-sockets. The idea is that my app tries to auto-connect to recently connected to hosts when it starts up. If it can't establish a connection to any of them, then it directs the user to the connection part and asks them to establish a connection manually.
All of this works. In summary, I try each known host in order, and if 200ms later it hasn't connected (readyState != 1), I try the next one. All these hosts should be on the LAN, so 200ms works pretty reliably. If the last one on the list fails too, the app opens a modal directing the user to manually type in a host.
The problem is, by trying to auto-connect, I have to create websockets to my attempted hosts, which outputs error messages like the following to the console:
WebSocket connection to 'ws://lightmate:8080/' failed: Error in connection establishment: net::ERR_NAME_NOT_RESOLVED
WebSocket connection to 'ws://localhost:8080/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
While not a fatal flaw by any means, it's unsightly and gets in the way of my debugging.
I've tried to remove it by surrounding the calls to new WebSocket(address) with a try/catch block, but the errors still get through. I've also tried setting an onerror handler, hoping that would suppress the error messages. Nothing has worked.
connect: function(){
  var fulladdr = completeServerAddress(address);
  try {
    connection = new WebSocket(fulladdr);
    connection.suppressErrorsBecauseOfAutoConnection = suppressErrorsBecauseOfAutoConnection; //Store this module-scoped variable in connection, so if the module changes suppression state, this connection won't.
  } catch (e){
    //Make sure we don't try to send anything down this dead websocket
    connection = false;
    return false;
  }
  connection.binaryType = "arraybuffer";
  connection.onerror = function(){
    if (connection !== false && !connection.suppressErrorsBecauseOfAutoConnection){
      Announce.announceMessage("Connection failed with server");
    }
    connection = false;
  };
  connection.onmessage = function(m){
    rxMessage(ConnectionProtocol.BaseMessage.parseChunk(m.data));
  };
  connection.onclose = function(){
    hooks.swing("disconnected", "", 0);
    if (connection !== false && !connection.suppressErrorsBecauseOfAutoConnection){
      Announce.announceMessage("Connection lost with server");
    }
  };
  connection.onopen = function(){
    sendMessages(ConnectionProtocol.HandshakeMessage.create(name, sources, sinks));
    while (idlingmessages.length){
      websocketConnection.send(idlingmessages.splice(0,1)[0]);
    }
    hooks.swing("connected", "", 0);
  };
},
Dupl Disclaimer:
This question is similar to this StackOverflow question, but that question is out of date by a year, and the consensus there was "you can't". I'm hoping things have changed since then.
There is no way to trap that error message, which occurs asynchronously to the code where the WebSocket object is created.
More details here: Javascript doesn't catch error in WebSocket instantiation

Node.js domain cluster worker disconnect

Looking at the example given at the Node.js domain doc page (http://nodejs.org/api/domain.html), the recommended way to restart a worker using cluster is to first call disconnect in the worker and listen for the disconnect event in the master. However, if you just copy/paste the example given, you will notice that the disconnect() call does not shut down the current worker:
try {
  var killtimer = setTimeout(function() {
    process.exit(1);
  }, 30000);
  killtimer.unref();
  server.close();
  cluster.worker.disconnect();
  res.statusCode = 500;
  res.setHeader('content-type', 'text/plain');
  res.end('Oops, there was a problem!\n');
} catch (er2) {
  console.error('Error sending 500!', er2.stack);
}
What happens here is:
1. I do a GET request at /error
2. A timer is started: in 30s the process will be killed if it hasn't exited already
3. The HTTP server is shut down
4. The worker is disconnected (but still alive)
5. The 500 page is displayed
6. I do a second GET request at /error (before the 30s have passed)
7. A new timer is started
8. The server is already closed => it throws an error
9. The error is caught in the "catch" block and no response is sent back to the client, so on the client side the page keeps waiting without any message
In my opinion, it would be better to just kill the worker and listen for the 'exit' event on the master side to fork again (the master side is sketched after the code below). This way, the 500 error is always sent when an error occurs:
try {
  var killtimer = setTimeout(function() {
    process.exit(1);
  }, 30000);
  killtimer.unref();
  server.close();
  res.statusCode = 500;
  res.setHeader('content-type', 'text/plain');
  res.end('Oops, there was a problem!\n');
  cluster.worker.kill();
} catch (er2) {
  console.error('Error sending 500!', er2);
}
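For completeness, the master side of this kill-and-refork approach is not shown in the question; a minimal sketch using the standard cluster API:

var cluster = require('cluster');

if (cluster.isMaster) {
  cluster.fork();
  // Refork whenever a worker dies so capacity is restored after worker.kill().
  cluster.on('exit', function(worker, code, signal) {
    console.error('worker %d died (%s), forking a new one', worker.process.pid, signal || code);
    cluster.fork();
  });
}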
I'm not sure about the side effects of using kill instead of disconnect, but it seems that disconnect waits for the server to close; however, this does not appear to work (at least not as it should).
I just would like some feedbacks about this. There could be a good reason this example is written this way that I've missed.
Thanks
EDIT:
I've just checked with curl, and it works well.
However, I was previously testing with Chrome, and it seems that after the 500 response is sent back, Chrome sends a second request BEFORE the server actually finishes closing.
In this case, the server is closing but not closed (which means the worker is also disconnecting without being disconnected), causing the second request to be handled by the same worker as before, so:
- It prevents the server from finishing its close
- When the second server.close(); call is evaluated, it throws an exception because the server is not closed
- All following requests trigger the same exception until the killtimer callback fires
I figured it out: when the server is in the process of closing and receives a request at the same time, it stops its closing process.
So it still accepts connections, but can no longer be closed.
Even without cluster, this simple example illustrates this:
var PORT = 8080;
var domain = require('domain');

var server = require('http').createServer(function(req, res) {
  var d = domain.create();
  d.on('error', function(er) {
    try {
      var killtimer = setTimeout(function() {
        process.exit(1);
      }, 30000);
      killtimer.unref();
      console.log('Trying to close the server');
      server.close(function() {
        console.log('server is closed!');
      });
      console.log('The server should not now accepts new requests, it should be in "closing state"');
      res.statusCode = 500;
      res.setHeader('content-type', 'text/plain');
      res.end('Oops, there was a problem!\n');
    } catch (er2) {
      console.error('Error sending 500!', er2);
    }
  });
  d.add(req);
  d.add(res);
  d.run(function() {
    console.log('New request at: %s', req.url);
    // error
    setTimeout(function() {
      flerb.bark();
    });
  });
});
server.listen(PORT);
Just run:
curl http://127.0.0.1:8080/ http://127.0.0.1:8080/
Output:
New request at: /
Trying to close the server
The server should not now accepts new requests, it should be in "closing state"
New request at: /
Trying to close the server
Error sending 500! [Error: Not running]
Now single request:
curl http://127.0.0.1:8080/
Output:
New request at: /
Trying to close the server
The server should not now accepts new requests, it should be in "closing state"
server is closed!
So with Chrome making one more request (for the favicon, for example), the server is not able to shut down.
For now I'll keep using worker.kill(), which makes the worker not wait for the server to stop.
I ran into the same problem around 6 months ago; sadly I don't have any code to demonstrate, as it was from my previous job. I solved it by explicitly sending a message to the worker and calling disconnect at the same time. disconnect prevents the worker from taking on new work, and in my case, since I was tracking all the work the worker was doing (it was for an upload service with long-running uploads), I was able to wait until all of it was finished and then exit with 0.
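Since that answer has no code, here is a rough sketch of what it describes; pendingUploads and the 'shutdown' message are purely illustrative, not taken from the answer:

var cluster = require('cluster');

var pendingUploads = 0; // incremented/decremented by the upload handlers

process.on('message', function(msg) {
  if (msg !== 'shutdown') return;
  // Stop accepting new work from the master, but keep the process alive
  // until the in-flight uploads have drained, then exit cleanly.
  cluster.worker.disconnect();
  var timer = setInterval(function() {
    if (pendingUploads === 0) {
      clearInterval(timer);
      process.exit(0);
    }
  }, 1000);
});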
