node.js multiple continuous volume tcp/sockets - concurrent execution?

We have our socket application written in C# and we are researching converting it to Node.js.
I understand Node.js is single-threaded and execution is driven by the event loop.
We have multiple send and receive sockets (about 10 send and 10 receive) to different IPs. These carry environment, wind, climate, and related data that stream continuously around the clock. In other words, a receive socket is never idle; it always has data to receive, and likewise each send socket always has something to send.
With that said, will Node.js block, backlog, or slow down the other 19 sockets while one of them is being serviced by the event loop?
If it does, do I have to run 20 separate node.exe instances, or is there a way to fork it out?
Appreciate your advice.
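(For illustration, a minimal sketch of this setup in Node.js might look like the following; the endpoint list and the processReading handler are placeholders, not the asker's actual code. Each socket's 'data' events interleave on the single event loop, so no socket blocks the others as long as each handler returns quickly.)

const net = require('net');

// Hypothetical endpoint list; substitute the real 10 send / 10 receive targets.
const endpoints = [
  { host: '192.0.2.1', port: 5000 },
  { host: '192.0.2.2', port: 5001 },
];

function processReading(host, chunk) {
  // Placeholder: parse and store the sensor reading. Must not block.
  console.log(host, chunk.length, 'bytes');
}

endpoints.forEach(({ host, port }) => {
  const socket = net.createConnection({ host, port });

  // 'data' events from all sockets interleave on the one event loop;
  // the loop only stalls if a handler does long synchronous work.
  socket.on('data', (chunk) => processReading(host, chunk));

  socket.on('error', (err) => console.error(host + ':' + port, err.message));
});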

Related

If I use setInterval to request data from the database, may it damage the server?

I'm trying to develop a chat system in PHP, SQL, and AJAX. I created an AJAX function that fetches messages from the database when the window loads. If I open two browser windows to test the application, I can see the messages, but when I send a message it appears only in the window that sent it, not in both. To solve this problem I call the fetch function with setInterval every 1 second to show new messages.
Does this volume of requests damage the server?
I don't quite know what you mean by "damage", but nothing is really damaged by a few extra requests.
If you're wondering whether the webserver can handle the load, it really depends on how many chat sessions are going at the same time. Any decent web server should be able to handle a lot more than two requests per second. If you have thousands of chat sessions open, or you have very CPU intensive code, then you may notice issues.
A bigger issue may be your network latency. If your network takes more than a second for a round-trip communication with the server, then you may end up with multiple requests coming from the same client at the same time.
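As an illustration, a once-per-second poll on the client might look like the sketch below. The /get_messages.php endpoint, the response shape, and appendMessage are assumptions, not the asker's actual code; asking the server only for rows newer than the last one seen keeps each request cheap.

var lastMessageId = 0; // highest message id rendered so far

function appendMessage(msg) {
  // Placeholder: render the message in the chat window.
  document.getElementById('chat').textContent += msg.text + '\n';
}

setInterval(function () {
  var xhr = new XMLHttpRequest();
  // Hypothetical endpoint returning a JSON array of {id, text} rows newer than "since".
  xhr.open('GET', '/get_messages.php?since=' + lastMessageId, true);
  xhr.onload = function () {
    JSON.parse(xhr.responseText).forEach(function (msg) {
      lastMessageId = Math.max(lastMessageId, msg.id);
      appendMessage(msg);
    });
  };
  xhr.send();
}, 1000); // one request per second per open window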

Node + socket.io - weird response time

I have a Node.js application dedicated solely to listening for and emitting websockets. My stack is up to date. I'm using the clean Socket.io client/server setup from this page:
http://socket.io/docs/
I discovered that once in a while it takes about 15-25 seconds to get a response from the Node.js server. Usually a response takes under 5 ms, but one in every 20, 30, or 40 calls takes up to 25 seconds. I found this out by sending a message to the server, receiving the response, and calculating the time spent on that transaction (like most benchmarking apps do). I tried different configurations, transport methods, etc., and it's the same.
I run it on Apache/2.2.14. I quickly prepared a similar test for Sock.js and the response time never goes above 5 ms.
Has anyone had the same issue? What could be the reason? I know I could just use Sock.js, but the big app is already built on Socket.io and it would be hard to rewrite it to use a different socket package.
Cheers.
After a couple of hours of debugging I found the problem, which is... Kaspersky antivirus on my local machine. I had set up my socket server on port 8080. Kaspersky does not block websockets, but it scans this port, and 1 in 20 sockets takes 20 seconds to get a response. Solution: change the port to something that is not scanned by antiviruses (not sure, but 3000 or 843 should do the trick).
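For reference, a round-trip timer like the one described in the question can be as simple as the sketch below. The event names are made up, and the server is assumed to echo 'pong-check' whenever it receives 'ping-check'; outliers like the 20-second stalls show up quickly in the log.

var socket = io('http://localhost:8080'); // adjust to your server and port

function timeRoundTrip() {
  var start = Date.now();
  socket.once('pong-check', function () {
    console.log('round trip: ' + (Date.now() - start) + ' ms');
  });
  socket.emit('ping-check', start);
}

setInterval(timeRoundTrip, 1000); // one measurement per second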

What faster alternatives do I have to websockets for a real-time web application?

I'm planning to write a real time co-op multiplayer game. Currently I'm in the research phase. I've already written a turn-based game which used websockets and it was working fine.
I haven't tried writing a real-time game using this technology, however. My question is about websockets: is there an alternative way to handle communications between (browser) clients? My idea is to have the game state in each client and only send the deltas to the clients, using the server as a mediator/synchronization tool.
My main concern is network speed. I want clients to be able to receive each other's actions as fast as possible so my game can stay real time. I have about 20-30 frames per second with less than a KByte of data per frame (which means a maximum of 20-30 KBytes of data per second per client).
I know that things like "network speed" depend on the connection but I'm interested in the "if all else equals" case.
From a standard browser, a webSocket is going to be your best bet. The only two alternatives are webSocket and Ajax. Both are TCP under the covers, so once the connection is established they offer pretty much the same transport. But a webSocket is a persistent connection, so you save connection overhead every time you want to send something. Plus, the persistent connection of the webSocket allows you to send directly from server to client, which you will want.
In a typical gaming design, the underlying gaming engine needs to adapt to the speed of the transport between the server and any given client. If the client has a slower connection, then you have to drop the number of packets you send to something that can keep up (perhaps fewer frame updates in your case). The connection speed is what it is so you have to make your app deliver the best experience you can at the speed that there is.
Some other things you can do to optimize the use of the transport:
Collect all the data you need to send at one time and send it in one larger send operation rather than lots of small sends (see the sketch after this list). In a webSocket, don't send three separate messages each with its own data. Instead, create one message that contains the info from all three messages.
Whenever possible, don't rely on the latency of the connection by sending, waiting for a response, sending again, waiting for response, etc... Instead, try to parallelize operations so you send, send, send and then process responses as they come in.
Tweak settings for outgoing packets from your server so there is no Nagle delay waiting to see if there is other data to go in the same packet. See Nagle's Algorithm. I don't think you have the ability to tweak this from the client in a browser.
Make sure your data is encoded as efficiently as possible for smallest packet size.
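To make the first point concrete, here is a minimal batching sketch. The endpoint URL is hypothetical, and the ~33 ms tick is chosen to match the 30 frames per second mentioned in the question:

var socket = new WebSocket('ws://example.com/game'); // hypothetical game server
var outbox = [];

function queueMessage(msg) {
  outbox.push(msg); // buffer instead of calling socket.send() per message
}

// Flush the buffer once per tick as a single, frame-sized send.
setInterval(function () {
  if (outbox.length > 0 && socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify(outbox));
    outbox.length = 0;
  }
}, 33); // ~30 ticks per second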

WebSocket TCP packets clumping together?

Concerning JavaScript & PHP WebSocket TCP packet clumping, with an example below.
For some reason, when sending packets quickly on my VPS, or when accessing my localhost through a domain pointing at my IP address, multiple packets clump together. For this example, I am trying to stream 20 (~100-byte) packets per second. On the server's end, they ARE indeed being sent out at a steady rate, exactly every 50 ms, making 20 per second.
However, when they get to the client, the client only processes new messages about every 1/4th of a second, so packets are only received at a rate of about 4 per second...
What is causing this clumping of packets? The problem does not occur when everything runs through localhost. What's weirder is that it streams smoothly in iPhone's iOS Mobile Safari with no problem at all, BUT it doesn't work at all in PC Safari (because I haven't set this up to work correctly with the old Hixie-76 WebSocket format; I'm assuming Mobile Safari is already using the newer RFC 6455 format or a newer JavaScript engine). I have tried multiple hosting companies, with the exact same results each time.
See the example below, hosted on InMotion's VPS:
http://www.hovel.me/script/serverControl.php
(Click [Connect] on the left, then [View Game] on the right).
The current packet counter will jump by about 5 every time, as each batch of 5 packets is received at once, every 1/4th of a second. However, I've seen examples that can send a constant, quick stream of packets.
What causes this clumping together, with packets waiting for each other?
EDIT: This HAS to be something to do with Nagle's algorithm, which collects and sends small packets together. I'll work towards trying to bypass this in PHP.
Even with TCP_NODELAY set in PHP, the problem still stands. Why it works on an iPhone but not on a PC is still throwing me off...
EDIT: Setting TCPNoDelay and TcpAckFrequency to 1 in the registry fixes this, but I can't expect every user to do that. There must be a client-side, bread-and-butter JavaScript way.
How can I replicate node.js' "socket.setNoDelay(true)" functionality without using node.js?
In the end, the client not recognizing that Nagle's algorithm had been disabled, along with its acknowledgement frequency still being set at around 200 ms, was causing the intermediate network to hold the following packets in a buffer. Manually sending an acknowledgement message to the server every time the client receives a message causes the network to immediately "wake up" and process the next packets, instead of holding them in a buffer.
For Example:
var conn = new WebSocket(url);
conn.onmessage = function (evt) {
    conn.send('ACKNOWLEDGE BYTES'); // send an ACK back to the server immediately
    dispatch('message', evt.data);  // then run the regular event procedures
};
This temporary solution works; however, it will nearly double bandwidth usage, among other network problems. Until I can get the WebSocket on the client's end to correctly not 'standby' for the server ACK, and the network to push messages through immediately, this gets the packets through quicker without the buffer corking problem.
That's TCP. It is helping you by economizing on IP packets. Part of it is due to the Nagle algorithm but part of it may also be caused by the intermediate network.
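For reference, the node.js call the question mentions is a one-liner on the server side; it disables Nagle's algorithm per connection, though as the answer above notes, the client-side ACK trick is what actually unblocked the intermediate network here. A minimal sketch, with the payload and timing matching the question's 20 packets per second:

const net = require('net');

const server = net.createServer((socket) => {
  socket.setNoDelay(true); // disable Nagle's algorithm on this connection

  // Stream a small packet every 50 ms (20/sec), as in the question;
  // writes go out immediately instead of being coalesced.
  const timer = setInterval(() => socket.write('~100 bytes of data\n'), 50);
  socket.on('close', () => clearInterval(timer));
});

server.listen(8080);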

Monitoring WebSockets latency

We're building a latency-sensitive web application that uses websockets (or a Flash fallback) for sending messages to the server. While there is an excellent tool called Yahoo Boomerang for measuring bandwidth/latency for web apps, the latency value produced by Boomerang also includes the time necessary to establish HTTP connection, which is not what I need, since the websockets connection is already established and we actually only need to measure the ping time. Is there any way to solve it?
Second, Boomerang appears to fire only once when the page is loaded and doesn't seem to rerun the tests later even if commanded to. Is it possible to force it to run connection tests e.g. every 60 seconds?
Seems pretty trivial to me.
Send PING to the server. Time is t1.
Read PONG response. Time is t2 now.
ping time = t2 - t1
Repeat every once in a while (and optionally report to the stats server).
Obviously, your server would have to know to send PONG in response to a PING command.
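A browser-side sketch of that loop, assuming the server echoes the literal string 'PONG' for every 'PING' (the endpoint URL is hypothetical, and report() stands in for whatever your stats pipeline expects):

var conn = new WebSocket('ws://example.com/ws'); // hypothetical endpoint
var pingSentAt = 0;

conn.onmessage = function (evt) {
  if (evt.data === 'PONG') {
    var latencyMs = performance.now() - pingSentAt; // t2 - t1
    report(latencyMs);
  }
};

function report(ms) {
  console.log('ping time: ' + ms.toFixed(1) + ' ms'); // or POST to a stats server
}

setInterval(function () {
  if (conn.readyState === WebSocket.OPEN) {
    pingSentAt = performance.now(); // t1
    conn.send('PING');
  }
}, 60000); // re-run the measurement every 60 seconds, as asked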
