I've written a web application that uses web-sockets. The idea is that my app tries to auto-connect to recently connected to hosts when it starts up. If it can't establish a connection to any of them, then it directs the user to the connection part and asks them to establish a connection manually.
All of this works. In summary, I try each known host in order, and if 200ms later it hasn't connected (`readyState != 1`), it tries the next one. All these hosts should be on the LAN, so 200ms works pretty reliably. If the last one on the list fails too, then the web app opens a modal directing the user to manually type in a host.
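To make the flow concrete, here is a simplified sketch of that retry loop (knownHosts, autoConnect and the hard-coded port are illustrative placeholders, not my actual module code):
// Try each recently used host in order; fall back to the manual modal.
function autoConnect(knownHosts, onAllFailed) {
    var index = 0;
    function tryNext() {
        if (index >= knownHosts.length) {
            onAllFailed(); // open the manual-connection modal
            return;
        }
        var ws = new WebSocket('ws://' + knownHosts[index++] + ':8080/');
        setTimeout(function () {
            if (ws.readyState !== 1) { // 1 === WebSocket.OPEN
                ws.close();
                tryNext();             // give this host 200ms, then move on
            }
        }, 200);
    }
    tryNext();
}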
The problem is, by trying to auto-connect, I have to create websockets to my attempted hosts, which outputs error messages like the following to the console:
WebSocket connection to 'ws://lightmate:8080/' failed: Error in
connection establishment: net::ERR_NAME_NOT_RESOLVED
WebSocket connection to 'ws://localhost:8080/' failed: Error in
connection establishment: net::ERR_CONNECTION_REFUSED
While not a fatal flaw by any means, it's unsightly and gets in the way of my debugging.
I've tried to remove it by surrounding the calls to new WebSocket(address) with a try/catch block, but the errors still get through. I've also tried setting an onerror handler, hoping that would suppress the error messages. Nothing has worked.
connect: function(){
    var fulladdr = completeServerAddress(address);
    try {
        connection = new WebSocket(fulladdr);
        // Store this module-scoped variable in connection, so if the module
        // changes suppression state, this connection won't.
        connection.suppressErrorsBecauseOfAutoConnection = suppressErrorsBecauseOfAutoConnection;
    } catch (e){
        // Make sure we don't try to send anything down this dead websocket
        connection = false;
        return false;
    }
    connection.binaryType = "arraybuffer";
    connection.onerror = function(){
        if (connection !== false && !connection.suppressErrorsBecauseOfAutoConnection){
            Announce.announceMessage("Connection failed with server");
        }
        connection = false;
    };
    connection.onmessage = function(m){
        rxMessage(ConnectionProtocol.BaseMessage.parseChunk(m.data));
    };
    connection.onclose = function(){
        hooks.swing("disconnected", "", 0);
        if (connection !== false && !connection.suppressErrorsBecauseOfAutoConnection){
            Announce.announceMessage("Connection lost with server");
        }
    };
    connection.onopen = function(){
        sendMessages(ConnectionProtocol.HandshakeMessage.create(name, sources, sinks));
        while (idlingmessages.length){
            connection.send(idlingmessages.splice(0, 1)[0]);
        }
        hooks.swing("connected", "", 0);
    };
},
Duplicate Disclaimer:
This question is similar to this StackOverflow question, but that question is over a year old and the consensus there was "you can't". I'm hoping things have changed since then.
There is no way to trap that error message, which occurs asynchronously to the code where the WebSocket object is created.
More details here: Javascript doesn't catch error in WebSocket instantiation
Related
I am running an SMTP Server using http://nodemailer.com/extras/smtp-server/ to accept all the mail submissions.
When the mail submission agent uses STARTTLS I get the following error.
5|producer | [2020-10-09 07:28:52] DEBUG [#ff7cqlwi7rat6z2k] C: EHLO qa.mydomain.com
5|producer | [2020-10-09 07:28:52] DEBUG [#ff7cqlwi7rat6z2k] S: 421 mydomain.com You talk too soon
5|producer | [2020-10-09 07:28:52] INFO [#ff7cqlwi7rat6z2k] Connection closed to 91.198.201.301
However, this happens only with some clients; I have tried a few other tools and they upgrade the connection to TLS without any issue.
Below are my server configuration options.
SMTPServerOptions = {
    secure: false,
    hideSTARTTLS: true,
    authOptional: true,
    debug: true,
    logger: true,
    onAuth,
    onData
}

if (conf.tls) {
    SMTPServerOptions.ca = fs.readFileSync('./certificates/chain.pem', 'ascii')
    SMTPServerOptions.key = fs.readFileSync('./certificates/privkey.pem', 'ascii')
    SMTPServerOptions.cert = fs.readFileSync('./certificates/cert.pem', 'ascii')
}

// Creating new SMTP object
const server = new SMTPServer(SMTPServerOptions);
server.on('error', err => {
    error(err)
    throw err
});
server.listen(conf.server_port);
I was able to solve this by commenting out some parts of the code in the nodemailer smtp-server module. Just posting here so that it helps others looking for an answer to the same problem.
Some SMTP clients, like the one I used, do not wait for the server response after connecting and send the EHLO or HELO command immediately. In the source code of the module, these clients are treated as early talkers and the connection is blocked to avoid spamming.
Commenting out the timeout and emitting the connectionReady() event immediately solved the problem.
/**
 * Initiates the connection. Checks connection limits and reverse resolves client hostname. The client
 * is not allowed to send anything before init has finished otherwise 'You talk too soon' error is returned
 */
init() {
    // Setup event handlers for the socket
    this._setListeners(() => {
        // Check that connection limit is not exceeded
        if (this._server.options.maxClients && this._server.connections.size > this._server.options.maxClients) {
            return this.send(421, this.name + ' Too many connected clients, try again in a moment');
        }

        // Keep a small delay for detecting early talkers
        //setTimeout(() => this.connectionReady(), 100);

        // No need to detect the early talkers
        this.connectionReady();
    });
}
Also disable the reverse lookup if it is taking too long:
// disabling the reverse lookup which will solve 'you talk so soon problem'
this.options.disableReverseLookup = true;
What's the problem?
I'm using node-mysql to connect to mysql.
I have a really hard time dealing with the server disconnects / wait_timeouts as mentioned here: https://github.com/felixge/node-mysql#server-disconnects
I receive the error message This socket has been ended by the other party every time I try to recreate the connection while handling a PROTOCOL_CONNECTION_LOST error.
What I'm trying to do
// Connect Function
db.connect = function(callback){
    // Destroy the connection if there is already one
    if(db.connection) {
        console.log('[mysql]', 'connection destroy');
        db.connection.destroy();
        db.connection = null;
    }

    // Create the connection
    db.connection = MySQL.createConnection(options);

    // Connect using the connection
    db.connection.connect(function(error) {
        if(error){
            console.log('[mysql]', 'connect failed', error);
        } else {
            console.log('[mysql]', 'connection success');
            db.connection.database = options.database;
            callback(db.connection, error);
        }
    });

    // Listen for connection errors
    db.connection.on('error', function(error) {
        // Connection to the MySQL server is usually lost due to either a
        // server restart, or a connection idle timeout (the wait_timeout
        // server variable configures this)
        if(error.code === 'PROTOCOL_CONNECTION_LOST') {
            console.log('[mysql]', 'PROTOCOL_CONNECTION_LOST');
            db.connect(callback);
        } else {
            console.log('[mysql] Connection Error: ', error);
        }
    });

    // Return the connection instance
    return db.connection;
}
Additional Details
wait_timeout and interactive_timeout have to be set to around 10 seconds in my.cnf to reproduce the issue:
[mysqld]
wait_timeout = 10
interactive_timeout = 10
The library is accidentally trying to write over the TCP socket after a FIN packet has been received, which means the TCP connection is half-closed by the other side. I'll fix this to give you a better error message, but it's still an error; either your network or something else is killing your MySQL connections.
Are you using the connection pool? It looks like you are just making a query on a connection after some timeout according to the stack trace. It's possible you are holding onto an inactive connection too long and something on the network or MySQL server is ending the connection.
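If you are not already pooling, a pool re-creates dead connections for you, so most of the PROTOCOL_CONNECTION_LOST handling disappears. A minimal sketch with node-mysql (the connection options are placeholders):
var mysql = require('mysql');

// The pool checks out a live connection per query and replaces
// connections that the server has closed in the meantime.
var pool = mysql.createPool({
    host: 'localhost',
    user: 'app',
    password: 'secret',
    database: 'mydb',
    connectionLimit: 10
});

pool.query('SELECT 1 + 1 AS two', function (err, rows) {
    if (err) {
        console.log('[mysql]', 'query failed', err);
        return;
    }
    console.log(rows[0].two); // 2
});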
Looking at the example given on the Node.js domain doc page (http://nodejs.org/api/domain.html), the recommended way to restart a worker using cluster is to first call disconnect in the worker, and listen for the disconnect event in the master. However, if you just copy/paste the example given, you will notice that the disconnect() call does not shut down the current worker:
What happens here is:
try {
    var killtimer = setTimeout(function() {
        process.exit(1);
    }, 30000);
    killtimer.unref();

    server.close();
    cluster.worker.disconnect();

    res.statusCode = 500;
    res.setHeader('content-type', 'text/plain');
    res.end('Oops, there was a problem!\n');
} catch (er2) {
    console.error('Error sending 500!', er2.stack);
}
I do a get request at /error
A timer is started: in 30s the process will be killed if not already
The http server is shut down
The worker is disconnected (but still alive)
The 500 page is displayed
I do a second get request at /error (before 30s)
New timer started
Server is already closed => throws an error
The error is caught in the "catch" block and no result is sent back to the client, so on the client side the page keeps waiting without any message.
In my opinion, it would be better to just kill the worker and listen for the 'exit' event in the master to fork again. This way, the 500 error is always sent during an error:
try {
    var killtimer = setTimeout(function() {
        process.exit(1);
    }, 30000);
    killtimer.unref();

    server.close();

    res.statusCode = 500;
    res.setHeader('content-type', 'text/plain');
    res.end('Oops, there was a problem!\n');

    cluster.worker.kill();
} catch (er2) {
    console.error('Error sending 500!', er2);
}
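On the master side, the only change from the docs example would be to refork on 'exit' instead of 'disconnect'. A minimal sketch, assuming the usual cluster master setup from the domain docs:
var cluster = require('cluster');

if (cluster.isMaster) {
    cluster.fork();

    // Whenever a worker dies (after cluster.worker.kill() above),
    // replace it so the pool of workers stays full.
    cluster.on('exit', function (worker, code, signal) {
        console.error('worker %d died (%s), forking a new one',
            worker.id, signal || code);
        cluster.fork();
    });
}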
I'm not sure about the side effects of using kill instead of disconnect, but it seems disconnect waits for the server to close; however, this does not seem to be working (at least not as it should).
I would just like some feedback about this. There could be a good reason this example is written this way that I've missed.
Thanks
EDIT:
I've just checked with curl, and it works well.
However, I was previously testing with Chrome, and it seems that after sending back the 500 response, Chrome makes a second request BEFORE the server actually finishes closing.
In this case, the server is closing but not yet closed (which means the worker is also disconnecting without being disconnected), causing the second request to be handled by the same worker as before, so:
It prevents the server from finishing its close
The second time the server.close(); line is evaluated, it throws an exception because the server is no longer running
All following requests will trigger the same exception until the killtimer callback is called
I figured it out: when the server is closing and receives a request at the same time, it stops its closing process.
So it still accepts connections, but can no longer be closed.
Even without cluster, this simple example illustrates this:
var PORT = 8080;
var domain = require('domain');
var server = require('http').createServer(function(req, res) {
    var d = domain.create();
    d.on('error', function(er) {
        try {
            var killtimer = setTimeout(function() {
                process.exit(1);
            }, 30000);
            killtimer.unref();

            console.log('Trying to close the server');
            server.close(function() {
                console.log('server is closed!');
            });
            console.log('The server should not now accepts new requests, it should be in "closing state"');

            res.statusCode = 500;
            res.setHeader('content-type', 'text/plain');
            res.end('Oops, there was a problem!\n');
        } catch (er2) {
            console.error('Error sending 500!', er2);
        }
    });
    d.add(req);
    d.add(res);
    d.run(function() {
        console.log('New request at: %s', req.url);
        // error
        setTimeout(function() {
            flerb.bark();
        });
    });
});
server.listen(PORT);
Just run:
curl http://127.0.0.1:8080/ http://127.0.0.1:8080/
Output:
New request at: /
Trying to close the server
The server should not now accepts new requests, it should be in "closing state"
New request at: /
Trying to close the server
Error sending 500! [Error: Not running]
Now single request:
curl http://127.0.0.1:8080/
Output:
New request at: /
Trying to close the server
The server should not now accepts new requests, it should be in "closing state"
server is closed!
So with Chrome making one more request (for the favicon, for example), the server is not able to shut down.
For now I'll keep using worker.kill(), which makes the worker not wait for the server to stop.
I ran into the same problem around 6 months ago; sadly I don't have any code to demonstrate, as it was from my previous job. I solved it by explicitly sending a message to the worker and calling disconnect at the same time. Disconnect prevents the worker from taking on new work, and in my case, since I was tracking all the work the worker was doing (it was for an upload service that had long-running uploads), I was able to wait until all of it finished and then exit with 0.
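From memory, the shape of it was roughly this; 'shutdown', pendingUploads and server are illustrative names, since I no longer have the original code:
// Worker side: drain in-flight uploads, then exit cleanly.
var pendingUploads = 0;

process.on('message', function (msg) {
    if (msg !== 'shutdown') return;
    server.close(); // stop accepting new connections
    var check = setInterval(function () {
        if (pendingUploads === 0) {
            clearInterval(check);
            process.exit(0);
        }
    }, 1000);
});

// Master side: ask the worker to drain and stop routing work to it.
worker.send('shutdown');
worker.disconnect();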
I want to provide a meaningful error to the client when too many users are connected or when they're connecting from an unsupported domain, so...
I wrote some WebSocket server code:
var http = require('http');
var httpServer = http.createServer(function (request, response)
{
    // i see this if i hit http://localhost:8001/
    response.end('go away');
});
httpServer.listen(8001);

// https://github.com/Worlize/WebSocket-Node/wiki/Documentation
var webSocket = require('websocket');
var webSocketServer = new webSocket.server({ 'httpServer': httpServer });
webSocketServer.on('request', function (request)
{
    var connection = request.reject(102, 'gtfo');
});
And some WebSocket client code:
var connection = new WebSocket('ws://127.0.0.1:8001');
connection.onopen = function (openEvent)
{
    alert('onopen');
    console.log(openEvent);
};
connection.onclose = function (closeEvent)
{
    alert('onclose');
    console.log(closeEvent);
};
connection.onerror = function (errorEvent)
{
    alert('onerror');
    console.log(errorEvent);
};
connection.onmessage = function (messageEvent)
{
    alert('onmessage');
    console.log(messageEvent);
};
All I get is the alert('onclose') with a CloseEvent object logged to the console, without any status code or message that I can find. When I connect via ws://localhost:8001, the httpServer callback doesn't come into play, so I can't catch it there. The RFC suggests I should be able to send any status code other than 101 when there's a problem, but Chrome throws an error in the console: Unexpected response code: 102. If I call request.reject(101, 'gtfo'), implying it was successful, I get a handshake error, as I'd expect.
Not really sure what else I can do. Is it just not possible right now to get the server response in Chrome's WebSocket implementation?
ETA: Here's a really nasty hack in the meantime; I hope that's not what I have to end up doing:
var connection = request.accept(null, request.origin);
connection.sendUTF('gtfo');
connection.close();
I'm the author of WebSocket-Node and I've also posted this response to the corresponding issue on GitHub: https://github.com/Worlize/WebSocket-Node/issues/46
Unfortunately, the WebSocket protocol does not provide any specific mechanism for providing a close code or reason at this stage when rejecting a client connection. The rejection is in the form of an HTTP response with an HTTP status of something like 40x or 50x. The spec allows for this but does not define a specific way that the client should attempt to divine any specific error messaging from such a response.
In reality, connections should be rejected at this stage only when you are rejecting a user from a disallowed origin (i.e. someone from another website is trying to connect users to your websocket server without permission) or when a user otherwise does not have permission to connect (i.e. they are not logged in). The latter case should be handled by other code on your site: a user should not be able to attempt to connect the websocket connection if they are not logged in.
The code and reason that WebSocket-Node allows you to specify here are an HTTP status code (e.g. 404, 500, etc.) and a reason to include as a non-standard "X-WebSocket-Reject-Reason" HTTP header in the response. It is mostly useful when analyzing the connection with a packet sniffer such as Wireshark. No browser has any facility for providing rejection codes or reasons to the client-side JavaScript code when a connection is rejected in this way, because it's not provided for in the WebSocket specification.
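For example, rejecting with a real HTTP status looks like this; originIsAllowed is a hypothetical helper, and the reason string only shows up on the wire as the X-WebSocket-Reject-Reason header, never in the browser:
webSocketServer.on('request', function (request) {
    if (!originIsAllowed(request.origin)) {
        // 403 is a normal HTTP status; the reason is only visible
        // to a packet sniffer, not to client-side JavaScript.
        request.reject(403, 'Origin not allowed');
        return;
    }
    var connection = request.accept(null, request.origin);
});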
I'm doing a simple UDP send using Node's built-in datagram UDP socket:
http://nodejs.org/docs/v0.3.1/api/dgram.html
The destination of the message is a domain name that has to be resolved by DNS before transmission; node.js handles this.
In the event that DNS resolution fails, dgram throws an "ENOTFOUND Domain Not Found" error and passes it to the callback that I've registered.
My code is like this:
client = dgram.createSocket("udp4");
client.send(message,
    0,
    message.length,
    this.port,
    this.address,
    function (err, bytes) {
        if (err) {
            // get rid of error??
        }
    }
);
client.close();
I'm not particularly interested in the error; if it fails, it fails, and it's not important to the business rules of the application. I'll log it to the console for completeness. BUT I can't stop this exception walking back up the stack and bringing down the application. How do I handle this error?
I don't want to put a global unhandled exception handler in place just for this. I've tried rethrowing the error inside the callback within a try/catch handler; that didn't work.
Any thoughts?
Thanks for reading.
You need to listen for an error event from the socket. If you don't, then node will convert this to an exception. Do not try to handle this with uncaughtException, because the only safe thing to do from uncaughtException is log then exit.
Here is an example of listening for error and causing an intentional DNS error:
var dgram = require('dgram');
var message = new Buffer("Some bytes");
var client = dgram.createSocket("udp4");
client.on("error", function (err) {
console.log("Socket error: " + err);
});
client.send(message, 0, message.length, 41234, "1.2.3.4.5");
This should print:
Socket error: Error: ENOTFOUND, Domain name not found
And the program will continue running.