I have a Node.js application dedicated solely to listening for and emitting websocket messages. My stack is up to date. I'm using the clean Socket.io client/server setup from this page:
http://socket.io/docs/
I discovered that once in a while it takes about 15-25 seconds to get a response from the Node.js server. Usually it takes up to 5 ms, but roughly one in every 20, 30 or 40 calls takes up to 25 seconds. I found that out by sending a message to the server, receiving the response and calculating the time spent on that transaction (like most benchmarking apps do). I tried different configurations, transport methods, etc., and it's the same.
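Roughly, the measurement I do looks like this (just a sketch; 'ping-test' is a made-up event name, and the server simply calls the acknowledgement callback as soon as the event arrives):

// client side: 'ping-test' is a made-up event name
var socket = io('http://localhost:8080');
var start = Date.now();
socket.emit('ping-test', {}, function () {
  // the acknowledgement callback fires when the server replies
  console.log('round trip: ' + (Date.now() - start) + ' ms');
});

// server side: answer immediately through the acknowledgement callback
io.on('connection', function (socket) {
  socket.on('ping-test', function (data, ack) {
    ack();
  });
});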
I run it on Apache/2.2.14. I quickly prepared a similar test for SockJS, and the response time never goes above 5 ms.
Did anyone have the same issue? What could be the reason? I know I could just use SockJS, but the thing is that a big app is already built on Socket.io, and it would be hard to rewrite it to use a different socket package.
Cheers.
After a couple of hours of debugging I found the problem, which is... Kaspersky antivirus on my local machine. I had set up my socket server on port 8080. Kaspersky does not block websockets, but it scans this port, and roughly 1 in 20 sockets takes 20 seconds to get a response. Solution: change the port to something that is not scanned by antiviruses (not sure, but 3000 or 843 should do the trick).
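For illustration, moving the server to another port is just a matter of changing the listen call; a sketch assuming the standard Express + Socket.io setup from the docs:

var app = require('express')();
var http = require('http').createServer(app);
var io = require('socket.io')(http);

io.on('connection', function (socket) {
  console.log('a user connected');
});

// listen on a port that antiviruses are less likely to scan
http.listen(3000, function () {
  console.log('listening on *:3000');
});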
Related
I am creating a question-answering application using Node.js + Express for my back-end. The front-end sends the question data to the back-end, which in turn makes requests to multiple third-party APIs to get the answer data.
The problem is that some of those third-party APIs take too long to respond, since they have to do some intense processing and calculations. For that reason, I have already implemented a caching system that saves the answer data for each different question. Nevertheless, that first request each time might take up to 5 minutes.
Since my back-end server waits and does not respond to the front-end until the data arrives (the connections are kept open), it can only serve 6 requests concurrently (that's what I have found). This is unacceptable in terms of performance.
What would be a workaround to this problem? Is there a way to not "clog" the server, so it can serve more than 6 users?
Is there a design pattern, in which the servers gives an initial response, and then serves the full data?
Perhaps, something that sets the request to "sleep" and opens up space for new connections?
Your server can serve many thousands of simultaneous requests if things are coded properly and it's not CPU intensive, just waiting for network responses. This is something that node.js is particularly good at.
A single browser, however, will only send a few requests at a time (it varies by browser) to the same endpoint, queuing the others until the earlier ones finish. So, my guess is that you're trying to test this from a single browser. That's not going to test what you really want to test, because the browser itself is limiting the number of simultaneous requests. node.js is particularly good at having lots of requests in flight at the same time. It can easily do thousands.
But, if you really have an operation that takes up to 5 minutes, that probably won't even work for an http request from a browser because the browser will probably time out an inactive connection still waiting for a result.
I can think of a couple possible solutions:
First, you could make the first http request just start the process and have it return immediately with an ID. Then, the client can check every 30 seconds or so after that, sending the ID in an http request, and your server can respond with whether it has the result yet for that ID. This would be a client-polling solution.
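A rough sketch of that polling approach, assuming hypothetical /start and /status/:id routes, an in-memory jobs map, and a made-up runSlowQuery() helper standing in for your third-party API calls:

var express = require('express');
var crypto = require('crypto');
var app = express();

var jobs = {};   // jobId -> { done, result }; in-memory, for illustration only

// kick off the slow work and return an ID right away
app.get('/start', function (req, res) {
  var id = crypto.randomBytes(8).toString('hex');
  jobs[id] = { done: false, result: null };
  runSlowQuery(req.query.question, function (answer) {   // hypothetical helper
    jobs[id] = { done: true, result: answer };
  });
  res.json({ id: id });
});

// the client polls this every 30 seconds or so with the ID it was given
app.get('/status/:id', function (req, res) {
  var job = jobs[req.params.id];
  if (!job) return res.status(404).end();
  res.json(job.done ? { done: true, answer: job.result } : { done: false });
});

app.listen(3000);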
Second, you could establish a webSocket or socket.io connection from client to server. Then, send a message over that socket to start the request. Then, whenever the server finishes its work, it can just send the result directly to the client over the webSocket or socket.io connection. After receiving the response, the client can either keep the webSocket/socket.io connection open for use again in the future or it can close it.
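And a sketch of the socket.io variant, where the event names 'ask' and 'answer' and the runSlowQuery() helper are again just placeholders:

var io = require('socket.io')(3000);

io.on('connection', function (socket) {
  socket.on('ask', function (question) {
    // runSlowQuery() again stands in for the slow third-party API calls
    runSlowQuery(question, function (answer) {
      // push the result whenever it is ready; no HTTP request is left hanging
      socket.emit('answer', answer);
    });
  });
});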
I have a node server running locally and I connect to this server with my web browser. I use a websocket for communication between client and server. I want to accurately measure the time of sending a websocket message to the server and from the server. I use the JavaScript function Date.now(), but the problem is that in my results the message was sometimes received by the server before it was sent by the client. I guess there is a difference between the client "clock" and the server "clock", but they are on the same machine.
I would be grateful for an explanation and links to resources.
As John Resig outlined in one of his blog articles, JavaScript time is not accurate. The clock you can access with JavaScript only "ticks" every few milliseconds, which means it can be roughly ~7 ms off the accurate time. Now, if your message travels for less than this small timespan (which can happen if it stays on localhost), it can look as if it was received before it was sent.
client sends:    real time 1000 ms, js time 1000 ms
travel time:     5 ms
server receives: real time 1005 ms, js time 998 ms  // inaccuracy
difference:      998 ms - 1000 ms = -2 ms
We can't do anything to improve this, as high-resolution timing via performance.now() was restricted for security reasons. So the only thing we can do is to introduce network latency :), but then the clocks are probably even further apart...
We have our socket application written in C# and we are researching converting it to the Node.js framework.
I understand Node.js is single-threaded and execution is driven by an event loop.
We have multiple send and receive sockets (about 10 send and 10 receive) to different IPs. Basically these carry environment, wind, climate and related data that stream continuously round the clock. In other words, a receive socket is never idle, it always has data to receive, and similarly a send socket always has something to send.
With that said, will Node.js block/backlog or slow down the other 19 sockets when one of them gets into the event loop?
If it does, do I have to run 20 separate node.exe instances, or is there a way to fork it out?
Appreciate your advice.
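For context, I picture the sockets being wired up in Node roughly like this (just a sketch, with made-up hosts and ports):

var net = require('net');

// ten receive sockets: incoming data arrives as 'data' events,
// each handler runs briefly and returns control to the event loop
for (var i = 0; i < 10; i++) {
  var rx = net.connect({ host: '10.0.0.' + (i + 1), port: 5000 });
  rx.on('data', function (chunk) {
    // parse the sensor stream here; keep this handler cheap
  });
}

// ten send sockets: write() queues data, the event loop flushes it
var senders = [];
for (var j = 0; j < 10; j++) {
  senders.push(net.connect({ host: '10.0.1.' + (j + 1), port: 6000 }));
}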
We're building a latency-sensitive web application that uses websockets (or a Flash fallback) for sending messages to the server. While there is an excellent tool called Yahoo Boomerang for measuring bandwidth/latency for web apps, the latency value produced by Boomerang also includes the time needed to establish the HTTP connection, which is not what I need, since the websocket connection is already established and we actually only need to measure the ping time. Is there any way to solve this?
Second, Boomerang appears to fire only once when the page is loaded and doesn't seem to rerun the tests later even if commanded to. Is it possible to force it to run connection tests e.g. every 60 seconds?
Seems pretty trivial to me.
Send PING to the server. Time is t1.
Read PONG response. Time is t2 now.
ping time = t2 - t1
Repeat every once in a while (and optionally report to the stats server).
Obviously, your server would have to know to send PONG in response to a PING command.
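A minimal sketch of that, assuming a raw browser WebSocket and a server that replies with the string 'PONG' whenever it receives 'PING' (the URL is a placeholder):

// client side
var ws = new WebSocket('ws://example.com/socket');   // placeholder URL
var pingSentAt;

function sendPing() {
  pingSentAt = Date.now();   // t1
  ws.send('PING');
}

ws.onmessage = function (event) {
  if (event.data === 'PONG') {
    var pingTime = Date.now() - pingSentAt;   // t2 - t1
    console.log('ping time: ' + pingTime + ' ms');
    // optionally report pingTime to the stats server here
  }
};

// repeat every once in a while
ws.onopen = function () {
  sendPing();
  setInterval(sendPing, 60000);
};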
I have a machine running node.js (v0.1.32) with a TCP server (tcp.createServer) and an HTTP server (http.createServer). The HTTP server is hit by long-polling requests (lasting 50 sec each) from a comet-based application on port 80, and there are TCP socket connections on port 8080 from an iPhone application for the same purpose.
It was found that after a while the server was not able to handle more connections (especially the TCP connections, while the HTTP connections appeared fine!!??) and was only normal again after a restart.
For load testing the connections I created a TCP server and spawned 2000 requests, and I figured out that connections start to fail after the max file descriptor limit on the machine is reached (default 1024), which is a really small number.
So, a novice question here: how do I scale my application to handle a larger number of connections on node.js, and how do I handle this issue?
Is there a way to find out how many active connections there are at the moment?
Thanks
Sharief
Hey Joey! I was looking for a Unix solution that would help me figure out how many connections are open at any given moment on my machine. The reason was that my server was not able to handle requests after a certain number of connections, and I figured out that my machine can handle only 1024 open connections at a time, i.e. the ulimit file descriptor value, which defaults to 1024. I modified this value with ulimit -n to suit my requirements.
To check the open connections I used lsof, which gives the list of open files, and from that I figured out how many connections were open on each port I was using.
You can get the count of connections using the code below:
// assuming `app` is an existing Express (or other) request handler
var http = require('http');
var server = http.createServer(app);
// getConnections asynchronously reports the number of open connections
server.getConnections(function (error, count) {
  console.log(count);
});
Using this you can keep a check on the connection count and, when it crosses a threshold, close the older connections. Hope it helps.
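As a sketch, a periodic check against a threshold could look something like this (reusing the server object from above; the 1000-connection threshold and 10-second interval are arbitrary):

// log the connection count every 10 seconds and warn past a threshold
setInterval(function () {
  server.getConnections(function (error, count) {
    if (error) return console.error(error);
    console.log('open connections: ' + count);
    if (count > 1000) {
      console.warn('connection count above threshold');
    }
  });
}, 10000);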
I don't know if there's a built-in way to get the number of active connections with Node, but it's pretty easy to rig something up.
For my comet-style Node app I keep an object that I add connections to as properties. Every X seconds I iterate over that object and see if there are any connections that should be closed (in your case, anything past your 50-second limit).
When you close a connection, just delete that property from your connections object. Then you can see how many connections are open at any time with Object.size(connections)
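A minimal sketch of that bookkeeping, using Object.keys(...).length in place of the Object.size helper mentioned above (the names connections, track and MAX_AGE_MS are just for illustration):

var connections = {};          // id -> { res, openedAt }
var nextId = 0;
var MAX_AGE_MS = 50 * 1000;    // the 50 second long-poll limit

// register each incoming long-poll response object
function track(res) {
  var id = nextId++;
  connections[id] = { res: res, openedAt: Date.now() };
  res.on('close', function () { delete connections[id]; });
  return id;
}

// every X seconds, close anything past the limit and drop it from the object
setInterval(function () {
  var now = Date.now();
  Object.keys(connections).forEach(function (id) {
    if (now - connections[id].openedAt > MAX_AGE_MS) {
      connections[id].res.end();
      delete connections[id];
    }
  });
  console.log('open connections: ' + Object.keys(connections).length);
}, 5000);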