I have a machine running Node.js (v0.1.32) with a TCP server (tcp.createServer) and an HTTP server (http.createServer). The HTTP server on port 80 is hit by long-polling requests (lasting 50 seconds each) from a comet-based application, and there are TCP socket connections on port 8080 from an iPhone application for the same purpose.
After a while the server could not handle any more connections (especially the TCP connections, while the HTTP connections appeared fine) and only returned to normal after a restart.
To load test the connections I created a TCP server and spawned 2000 requests, and found that connections start to fail once the machine's maximum file descriptor limit is reached (default 1024), which is a very small number.
So, a novice question here: how do I scale my application on Node.js to handle more connections, and how do I handle this issue?
Is there a way to find out how many active connections there are at any given moment?
Thanks
Sharief
Hey Joey! I was looking for a Unix solution that would help me figure out how many connections are open on my machine at any given moment. The reason was that my server could not handle requests after a certain number of connections, and I found that my machine can keep only 1024 connections open at a time, i.e. the ulimit file descriptor value, which defaults to 1024. I raised this value with ulimit -n to suit my requirement.
To check the open connections I used lsof, which lists open files, and counted how many connections were open on each port I was using.
You can get the count of connections as shown below:
var http = require('http');

var server = http.createServer(app); // app is your request handler (e.g. an Express app)
server.getConnections(function (error, count) {
    console.log(count); // number of concurrent connections on this server
});
Using this you can keep track of the connection count, and when it crosses a threshold you can close the oldest connections. Hope it helps.
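For example, a rough sketch of that threshold check (the 1000-connection limit and the 10-second interval are arbitrary illustration values, not recommendations):
var MAX_CONNECTIONS = 1000; // arbitrary threshold for illustration

setInterval(function () {
    server.getConnections(function (error, count) {
        if (!error && count > MAX_CONNECTIONS) {
            console.warn('Too many connections: ' + count);
            // react here, e.g. stop accepting new sockets or close idle ones
        }
    });
}, 10000); // check every 10 seconds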
I don't know if there's a built-in way to get the number of active connections with Node, but it's pretty easy to rig something up.
For my comet-style Node app I keep an object to which I add each connection as a property. Every X seconds I iterate over that object and see if there are any connections that should be closed (in your case, anything past your 50-second limit).
When you close a connection, just delete that property from your connections object. Then you can see how many connections are open at any time with Object.keys(connections).length.
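A minimal sketch of that bookkeeping, assuming a plain http server and the question's 50-second limit (the 10-second sweep interval is an arbitrary choice):
var http = require('http');

var connections = {};
var nextId = 0;

http.createServer(function (req, res) {
    var id = nextId++;
    connections[id] = { res: res, openedAt: Date.now() };

    // forget the connection as soon as it is gone
    res.on('close', function () {
        delete connections[id];
    });
}).listen(80);

// every 10 seconds, end anything older than 50 seconds and report the count
setInterval(function () {
    Object.keys(connections).forEach(function (id) {
        if (Date.now() - connections[id].openedAt > 50000) {
            connections[id].res.end();
        }
    });
    console.log('open connections: ' + Object.keys(connections).length);
}, 10000);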
I have an application that spins up websocket connections on random ports after checking that the port has not already been assigned. Each connection has a front-facing card with a slider to create/destroy the TCP connection based on the port number it is assigned (and stored). It is simple to spin up a server for each socket with the predefined event handling that it comes with, but I am unsure how to allow the user to kill the TCP connection. What this would look like is: the user on the front end would slide the toggle into the off position, and I would take that entity's id, query for its port number, and would then need to close that port's connection. I am hoping there is a way with node to be able to query for its active servers and act on them as one pleases but I have not found any articles suggesting a way.
I am hoping there is a way with node to be able to query for its active servers and act on them as one pleases but I have not found any articles suggesting a way.
There is no such thing built into node.js.
If you wanted to be able to operate on all the webSocket servers you had started, then you could just add them to an array as you start them.
const { WebSocketServer } = require('ws'); // assuming the "ws" package

const serverArray = [];

// code elsewhere that starts a server
let server = new WebSocketServer({ port: someRandomPort });

// push an object into the array that pairs the server with its port
serverArray.push({ server, port: someRandomPort });
Then, you could iterate over that array at any time to do something to all of them or to find a server that is using a particular port.
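For instance, a rough sketch of acting on one of them by port (closeServerOnPort is a hypothetical helper name; it assumes the ws package's clients set and close() method):
// hypothetical helper: shut down the webSocket server listening on a given port
function closeServerOnPort(port) {
    const entry = serverArray.find((e) => e.port === port);
    if (!entry) return false;

    // drop any clients still connected, then stop the server itself
    for (const client of entry.server.clients) {
        client.terminate();
    }
    entry.server.close();

    // remove it from the tracking array
    serverArray.splice(serverArray.indexOf(entry), 1);
    return true;
}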
But, it sounds to me like you don't really need multiple webSocket servers. Multiple clients (with however much security you want) can all share the same server. That's the usual client/server design (multiple clients talking to one server).
According to the accepted SO answer, the ping timeout must be greater than the ping interval, but according to the examples in the official socket.io docs, the timeout is less than the interval. Which one is correct? Also, what would be ideal values for both settings for a shared-whiteboard application where the moderator should not be able to edit the canvas while disconnected (when the internet drops)?
According to the socket.io documentation:
Among those options:
pingTimeout (Number): how many ms without a pong packet to consider the connection closed (60000)
pingInterval (Number): how many ms before sending a new ping packet (25000).
Those two parameters will impact the delay before a client knows the server is not available anymore. For example, if the underlying TCP connection is not closed properly due to a network issue, a client may have to wait up to pingTimeout + pingInterval ms before getting a disconnect event.
This leads me to believe there is no requirement for one value to be greater than the other. You will likely want a higher timeout to give a slow network connection enough time to deliver the pong. The interval is simply how often a ping is sent: short enough that a dead connection is noticed promptly, but not so short that you flood the connection with pings.
As for ideal values, this will be application specific. A few things I would consider:
How responsive must your application be?
How long will your application take to respond to a network request?
How large is the data being passed back and forth?
How many concurrent users will you have?
These are just a few considerations. For a small local application you would likely be fine with a timeout of 10000 and an interval of 5000, but this is an absolute guess. You will need to weigh the points above.
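As a concrete sketch, those example values would be applied when the server is created (this assumes the socket.io v3/v4 Server constructor and an existing httpServer; adjust for your version):
const { Server } = require('socket.io');

const io = new Server(httpServer, {
    pingInterval: 5000,  // send a ping every 5 seconds
    pingTimeout: 10000   // consider the connection closed if no pong arrives within 10 seconds
});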
I'm considering using Redis as a key-value store for my API application. The API basically needs only one client connection to Redis. What I'm not sure about is: should I keep the connection open forever, or should I open it only when I need to get or set values?
One could think that opening the connection is an expensive operation, which would argue for a long-lived connection. On the other hand, keeping the connection always open is not as secure as opening it only when needed, and connections held open for a long time can run into timeouts. Does Redis try to reconnect if the connection fails for some reason? How well does Redis handle long-lived connections? Any help is appreciated!
Automatic reconnection depends on the Redis client you are using. For example, if you use ioredis, it will automatically try to reconnect when the connection to Redis is lost, except when the connection is closed manually.
Source: https://github.com/luin/ioredis#auto-reconnect
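As an illustration, a minimal sketch with ioredis that creates one long-lived client at startup and reuses it everywhere (the retryStrategy shown is optional; ioredis reconnects by default):
const Redis = require('ioredis');

// one client, created once and shared by the whole API
const redis = new Redis({
    host: '127.0.0.1',
    port: 6379,
    // optional back-off: wait 50 ms, 100 ms, ... capped at 2 s between attempts
    retryStrategy: (times) => Math.min(times * 50, 2000)
});

redis.on('error', (err) => console.error('redis error:', err.message));

async function getValue(key) {
    return redis.get(key); // the same connection serves every request
}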
The official doc on agent.maxSockets says that it limits how many concurrent sockets my http(s) server can have. So I did some tests with http.globalAgent.maxSockets set to 5, expecting that I could have only 5 open websockets. But it turns out I can have more than 50 open websockets. Can anybody explain what agent.maxSockets really means?
http.Agent instances are used with outbound http clients (e.g. via http.request()), not inbound clients into an http.Server. So if you were to use an http.Agent with maxSockets set to 5 with http.request(), then there would only be at most 5 connected sockets to a particular server at any given time.
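A small sketch of where the limit actually applies, using outbound http.get() calls (example.com and the request count are arbitrary):
const http = require('http');

// agent for outbound requests, capped at 5 concurrent sockets per host
const agent = new http.Agent({ maxSockets: 5 });

for (let i = 0; i < 20; i++) {
    http.get({ host: 'example.com', path: '/', agent: agent }, (res) => {
        console.log('response ' + i + ': ' + res.statusCode);
        res.resume(); // drain the body so the socket can be reused
    });
}
// only 5 requests are in flight at a time; the rest queue inside the agent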
I have a Node.js application which is dedicated to listening for and emitting websockets only. My stack is up to date. I'm using the clean Socket.io client/server setup from this page:
http://socket.io/docs/
I discovered that once in a while it takes about 15-25 seconds to get a response from the Node.js server. Usually it takes up to 5 ms, but roughly one in every 20, 30 or 40 calls takes up to 25 seconds. I found that out by sending a message to the server, receiving the response and measuring the time spent on that round trip (like most benchmarking apps do). I tried different configurations, transport methods etc., and it's the same.
I run it on Apache/2.2.14. I quickly prepared a similar test with SockJS, and the response time never goes above 5 ms.
Did anyone have the same issue? What could be the reason? I know I could just use SockJS, but the thing is that a big app is already built on Socket.io and it would be hard to rewrite it to use a different socket package.
Cheers.
After a couple of hours of debugging I found the problem, which is... Kaspersky antivirus on my local machine. I had set up my socket server on port 8080. Kaspersky does not block websockets, but it scans this port, and 1 in 20 sockets takes 20 seconds to get a response. Solution: change the port to something that is not scanned by antivirus software (not sure, but 3000 or 843 should do the trick).