NodeJS + Socket IO Sends Too Much Data - javascript

I am currently developing a game using NodeJS + SocketIO but am having a problem with the amount of data being sent. The server currently sends about 600-800 kbps, which is not good at all.
Here are my classes:
Shape
Pentagon
Square
Triangle
Entity
Player
Bullet
Every frame (60 fps), I update each of the classes, and each class has an updatePack that will be sent to the client. The updatePack is pretty simple; it only contains the object's id and coords.
At first, I thought every game was like that (silly me). Then I looked into several simple games like agar.io, slither.io, diep.io, and rainingchain.com and found that they use < 100 kbps, which made me realize that I am sending too much data.
Then I looked into compressing the data being sent, but found out that data is automatically compressed when sent with Socket.io.
Here is how I send my data:
for (var i in list_containing_all_of_my_sockets) {
    var socket = list_containing_all_of_my_sockets[i];
    var data = set_data_function();
    socket.emit('some message', data);
}
How can I make it send less data? Is there something that I missed?

An opinionated answer, considering one way games handle server-client traffic. This is not the only possible answer:
Rendering is a presentation concern. Your server, which is the single source of truth about the game state, should only care about changing and advertising the game state. 60fps rendering is not a server concern, and therefore the server shouldn't be sending 60 updates per second for all moving objects (at that point you might as well render the whole thing on the server and send it over as a video stream).
The client should know about the game state and know how to render the changing game state at 60fps. The server should only send either state changes or events that change state to the client. In the latter case the client would know how to apply those events to change the state in tandem with the state known to the server.
For example, the server could be just sending the updated movement vectors (or even the acting forces) for each object and the client could be calculating the coordinates of each object based on their currently known movement vectors + the elapsed time.
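A rough sketch of that idea (the 'vector' event name, the field names, and the render function are assumptions, not code from the question):
// Server: emit an object's movement vector only when it changes
socket.emit('vector', { id: obj.id, x: obj.x, y: obj.y, vx: obj.vx, vy: obj.vy });

// Client: remember the last known vector per object and integrate positions locally at 60fps
var objects = {}; // id -> { x, y, vx, vy }
socket.on('vector', function (o) { objects[o.id] = o; });

var last = performance.now();
function frame(now) {
    var dt = (now - last) / 1000; // seconds elapsed since the previous frame
    last = now;
    for (var id in objects) {
        objects[id].x += objects[id].vx * dt;
        objects[id].y += objects[id].vy * dt;
    }
    render(objects); // hypothetical draw function
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);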

Maybe it's better not to send data every frame, but instead send it only on particular events (e.g. collisions, deaths, spawns).

Whenever a message is sent over a network, it contains not only the actual data you want to send but also a lot of additional data for routing, error prevention, and other overhead.
Since you're sending all your data in individual messages, you'll create this additional information for every single one of them.
So instead you should gather all the data you need to send, save it into one object, and send that in a single message.
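For example, a sketch that reuses the question's own socket list (the 'update' event name and the entities collection are assumptions):
// Build one update pack per tick, then emit it once per socket
var updatePack = [];
for (var id in entities) {
    updatePack.push({ id: entities[id].id, x: entities[id].x, y: entities[id].y });
}
for (var i in list_containing_all_of_my_sockets) {
    list_containing_all_of_my_sockets[i].emit('update', updatePack);
}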

You could use arrays instead of objects to cut down some size (by omitting the keys). It will be harder to use the data later, but... you've got to make compromises.
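As a sketch, the same update encoded both ways (byte counts are approximate):
// Object form: the keys travel with every single update
{ "id": 17, "x": 250, "y": 480 }   // ~28 bytes as JSON

// Array form: the position in the array implies the meaning
[17, 250, 480]                     // ~14 bytes as JSON

// Client side, decode by the agreed positions:
socket.on('update', function (packs) {
    packs.forEach(function (p) {
        var id = p[0], x = p[1], y = p[2];
        // ...apply to the matching local entity...
    });
});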
Also, by the looks of it, Socket.IO can compress data too, so you can utilize that.
As for updating, I've made a few games using Node.js and Socket.IO. The process is the following:
Player with socket id player1 sends his data (let's say coordinates {x:5, y:5})
Server receives the data and saves it in an object containing all players' data:
{
    "player1": {x:5, y:5},
    "player2": ...
    ...
}
Server sends that object to player1.
player1 receives the object and uses the data to visualize the other players.
Basically, a player receives data only after he has sent his own. This way, if his browser hangs, you don't bombard him with data. Otherwise, if you've sent 15 updates while the user's browser hung, he needs more time to process those 15 updates; during that time you send even more updates, so the browser needs even more time to process them. This snowballs into a disaster.
When a player receives data only after sending his own, you ensure that:
The sending player gets the other players' data immediately, meaning that he doesn't wait for the server's 60 times-per-second update.
If the player's browser crashes, he no longer sends data and therefore no longer receives it since he can't visualize it anyway.
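A minimal sketch of that request-driven flow (the event names and the 20-reports-per-second rate are assumptions):
// Server: save the sender's data, then answer only that sender with the full snapshot
socket.on('playerUpdate', function (data) {
    players[socket.id] = data;
    socket.emit('snapshot', players);
});

// Client: report our own coordinates on a timer; render whatever snapshot comes back
setInterval(function () {
    socket.emit('playerUpdate', { x: me.x, y: me.y });
}, 1000 / 20);
socket.on('snapshot', function (players) {
    draw(players); // hypothetical render function
});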

Related

Prevent client from controlling data to server

I've been making a simple socket.io program where the players can move around in an environment, with the client sending the server events.
However, it seems that the client can use the console to falsify data and send it to the server (by using socket.emit()).
Is there a way to combat that, so that the server only accepts "real" data, or to prevent the client from sending false data?
Your server should always hold the state of the application, and have a list of all possible actions for each state.
For example, if your character can move on a map, the server should always keep the coordinate of the player. Let's say the player is at coordinate (x, y). The server will only allow messages that move the player to (x+1, y+1), (x-1, y+1), (x+1, y-1) or (x-1, y-1). Any other message should be discarded.
If it receives a message saying the player wants to move to (x+500, y+500), it should ignore it and potentially mark the player as a cheater and disconnect it.
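A sketch of that rule as a Socket.IO handler (the 'move' event name, the players map, and the three-strike policy are assumptions; the one-step rule mirrors the example above):
socket.on('move', function (msg) {
    var p = players[socket.id]; // server-side state is the source of truth
    var dx = msg.x - p.x, dy = msg.y - p.y;
    if (Math.abs(dx) === 1 && Math.abs(dy) === 1) {
        // Legal one-step move: accept it
        p.x = msg.x;
        p.y = msg.y;
    } else {
        // Anything else is discarded; repeat offenders get disconnected
        p.strikes = (p.strikes || 0) + 1;
        if (p.strikes > 3) socket.disconnect(true);
    }
});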

Messages being delayed when using websockets

I have a program which uses WebSocket over TCP: the client is an extension in Chrome and the server is an application written in C++.
When I send small data from the client to the server, it works fine. But when I send large amounts of data (e.g. a source html page), it will be slightly delayed.
For Example:
Client sends: 1,2,3
Server receives: 1,2
Client sends: 4
Server receives: 3
Client sends: 5
Server receives: 4
It seems like there's a delay.
This is my client code:
var m_cWebsocket = new WebSocket("Servername");
if (m_cWebsocket == null) { return false; }
m_cWebsocket.onopen = onWebsocketOpen(m_cWebsocket);
m_cWebsocket.onmessage = onWebsocketMessage;
m_cWebsocket.onerror = onWebsocketError;
m_cWebsocket.onclose = onWebsocketError;
I am using m_cWebsocket.send(strMsg) to send data.
Server code:
while (true) {
    recv(sSocket, szBufferTmp, 99990, 0);
    // recv(sSocket, szBufferTmp, 99990, MSG_PEEK);
    // some process
}
Since you haven't posted any code to show your implementation of the TCP server or client I can only speculate and try to explain what might be going on here.
That means the potential problems and solutions I outline below may or may not apply to you, but regardless this information should still be helpful to others who might find this question in the future.
TL;DR: (most likely) either the server is too slow, the server is not properly waiting for complete 'TCP packets' to be buffered, or the server doesn't know when to properly start and stop and is de-syncing while it waits for what it thinks is a 'full packet' as defined by something like a buffer size.
It sounds to me like you are pushing data from the client faster than the server can read it, or, more likely, the server is buffering a set number of bytes from the current TCP stream and waiting for the buffer to fill before outputting additional data.
If you are sending this over localhost, it's unlikely you are close to the limit of the stream, and I would expect a server written in C++ to be able to keep up with the JavaScript client.
So this leads me to believe that the issue is in fact the stream buffer on the C++ side.
Now since the server has no way to know what data you are sending or how much of it you are sending, it is common for a TCP stream to use a stream buffer that continuously reads data from the socket until either the buffer has filled to a known size, or until it sees a predefined 'stop character'. This would usually be something like a "line end" or \n character, sometimes \r\n (carriage return, line feed) depending on your operating system.
Since you haven't specified how you are receiving your data, I'm going to assume you created either a char or byte buffer of a certain size. I'm pretty rusty on my C++ socket knowledge so I might be wrong, but I do believe there is a default 'read timeout' on C++ TCP streams as well.
This means you are possibly running into 1 of 2 issues.
Situation 1) You are waiting until that byte/char buffer is filled before outputting its data. The issue is that this acts like a bus that only leaves the station when all seats are filled. If you don't fill all the seats, your server just sits and waits until it receives enough data to fill up fully and output your data.
Situation 2) You are running up against the socket read timeout, and therefore the function is not getting all the data before outputting it. This is like a bus that runs by the clock: every 10 minutes the bus leaves the station; it doesn't matter if it's full or empty, it's leaving, and the next bus will pick up anyone who shows up late. In your case, the TCP stream isn't able to load 1, 2, and 3 onto a bus fast enough, so the bus leaves with just 1 and 2 on it, because after 20ms of not receiving data the server exits from the function and outputs the data. On the next loop, however, 3 is waiting at the top of the stream buffer ready to get on the next bus out. The stream will load 3, wait until those 20ms are up, and then exit before repeating this loop.
I think it's more likely the first situation is occurring though, as I would expect the server to either start catching up or fall further behind as the two endpoints either begin to sync together, or the internal TCP stream buffer fills up as the server falls further and further behind.
Main point here: you need some way to synchronize the client and the server connections. I would recommend sending a "start byte" and "end byte" to signal when a message has begun and finished, so you don't exit the function too early.
Or send a start byte, followed by the packet size in bytes, then fill up the buffer until it has the correct number of bytes. Additionally, you could include an end byte as well for some basic error checking.
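To illustrate, here is how length-prefix framing might look in JavaScript with a plain Node.js TCP socket (a sketch; the handleMessage function is hypothetical, and the same logic applies on the C++ side):
var net = require('net');

// Sender: prefix each message with a 4-byte big-endian length
function sendFramed(socket, payload) {
    var body = Buffer.from(payload);
    var header = Buffer.alloc(4);
    header.writeUInt32BE(body.length, 0);
    socket.write(Buffer.concat([header, body]));
}

// Receiver: accumulate bytes and only hand off complete frames
var pending = Buffer.alloc(0);
function onData(chunk) {
    pending = Buffer.concat([pending, chunk]);
    while (pending.length >= 4) {
        var size = pending.readUInt32BE(0);
        if (pending.length < 4 + size) break; // wait for the rest of the frame
        var message = pending.slice(4, 4 + size);
        pending = pending.slice(4 + size);
        handleMessage(message); // hypothetical handler
    }
}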
This is a pretty involved topic and hard to really give you a good answer without any code from you, but this should also help anyone in the future who might be having a similar issue.
EDIT: I went back and re-read your question and noticed you said it only happens with large amounts of data, so I think my original assumption was wrong and it's more likely situation 2: the client is sending the data to your server faster than the server can read it, which might be bottlenecking the connection, and the client is only able to send additional data once the server has emptied part of its TCP stream buffer.
Think of it like a tube of water. The socket (tube) can only accept (fill up with) so much data (water) before it's full. Once you let some water out the bottom, though, you can fill it up a little more. The only reason it works for small files is that the file is too small to fill the entire tube.
Additional thoughts: You can see how I was approaching this problem in C# in this question: Continuously reading from serial port asynchronously properly
And another similar question I had previously (again in C#): How to use Task.WhenAny with ReadLineAsync to get data from any TcpClient
It's been a while since I've played with TCP streams though, so my apologies that I don't remember all the niche details and caveats of the protocol, but hopefully this information is enough to get you in the ballpark for solving your problem.
Full disclaimer: it's been over 2 years since I last touched C++ TCP sockets, and I have since worked with sockets/websockets in other languages (such as C# and JavaScript), so I may have some facts wrong about the behavior of C++ TCP sockets specifically, but the core information should still apply. If I got anything wrong, someone in the comments will most likely have the correct information.
Lastly, welcome to stack overflow!

Web client polls backend to check it's up

A web client should only expose some features when a backend API is up and running. Therefore, I'm looking for a clean way to monitor the availability of this backend.
As a quick fix, I made a timer-based function that performs a basic GET on the API root. It's not very clean, generates lots of traffic, and pollutes the JavaScript console with errors when the server is down.
How should one deal with such situation?
You can trigger something along these lines when you need it:
function checkServerStatus() {
    setServerStatus("unknown");
    var img = document.body.appendChild(document.createElement("img"));
    img.onload = function() {
        setServerStatus("online");
    };
    img.onerror = function() {
        setServerStatus("offline");
    };
    img.src = "http://myserver.com/ping.gif";
}
Make ping.gif small (1 pixel) to make it as fast as possible.
Of course you can do it more smoothly by hitting an API endpoint that returns true and keeps a really small response time, but that requires some coding on the back-end; this simply needs you to place a 1-pixel gif image in the right directory on the server. You can use any picture already present on the server, but expect more traffic and time as the image grows larger.
Now put this in some function that calls it with a delay, or simply call it when you need to check status; it's up to you.
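If you do go the API route, a fetch-based version of the same check might look like this (the /ping endpoint and the 5-second timeout are assumptions):
function checkServerStatusViaApi() {
    setServerStatus("unknown");
    var controller = new AbortController();
    var timer = setTimeout(function () { controller.abort(); }, 5000); // give up after 5s
    fetch("/ping", { signal: controller.signal })
        .then(function (res) { setServerStatus(res.ok ? "online" : "offline"); })
        .catch(function () { setServerStatus("offline"); })
        .finally(function () { clearTimeout(timer); });
}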
If you need a server to send your app a notification when it's down, then you need to implement this:
https://en.wikipedia.org/wiki/Push_technology
Ideally, you would have a highly reliable server with a fast response rate pinging the desired server at some interval to determine whether it's up, then use push to get that information to your app. This way the third server would only send you a push if the status of your app server has changed. Ideally, this server's requests have high priority in your app server's queue, and the servers are well connected and close to each other, but not on the same network in case that fails.
Recommendation:
The first approach should do you good, since it's simple to implement and requires the least amount of knowledge.
Consider the second if:
You need a really small checking interval, which makes your application slower and network traffic heavier.
You have multiple applications that all need the same check, making the load heavier on each application, the network, AND the server. The second approach lets a single ping determine the truth for all apps.
To limit the number of requests, a simple solution is server-sent events. This protocol, used on top of HTTP, allows the server to push multiple updates in response to the same client request.
Client side code (javascript):
var evtSource = new EventSource("backend.php");
evtSource.onmessage = function(e) {
    console.log('status: ' + e.data);
};
evtSource.onerror = function(e) {
    // add some retry, then display the error to the user
};
Backend code (PHP, also supported by other languages):
header("Content-Type: text/event-stream");
while (1) {
    // Every 30s, send an OK status; SSE messages need a "data:" prefix and a blank line
    echo "data: OK\n\n";
    ob_flush();
    flush();
    sleep(30);
}
In both cases it will limit the number of requests (only one per "session"), but you will have one socket open per client, which can also be too heavy for your server.
If you really want to lower the workload, you should delegate it to an external monitoring platform that can expose an API to publish the backend status.
Such a service may already exist if your backend is hosted on a cloud platform.

Limiting entries in Javascript Object

Alrighty! I'm working on a small chat add-on for my website, where users see the chat history when they log on. I'm using a JavaScript object to store all messages in my NodeJS server. Now I'd like it so that whenever there are more than fifty entries in the object, adding the latest message removes the oldest; this should keep my server from handling a lot of messages every time a user logs on. How would I do this?
Here's how I store my messages,
var messages = {
    "session": []
};
messages.session.push({
    "name": user.name,
    "message": safe_tags_replace(m.msg),
    "image": user.avatar,
    "user": user.steamid,
    "rank": user.rank
});
I could also just load the last fifty messages in the JSON object, but whenever I run my server for a long time without restarting it, this object will become extremely big. Would this be a problem?
Since you are pushing elements to the end of your array, you could just use array shift() to remove the first element of the array if needed. e.g.
var MAX_MESSAGES = 50;
if (messages.session.length > MAX_MESSAGES) { messages.session.shift(); }
To answer the second part of your question:
The more data you hold, the more physical memory you consume on the client machine, obviously, which can by itself be a problem, especially on mobile devices and old hardware. Also, having huge arrays will impact performance on lookup, iteration, some insert operations, and sorting.
Storing JSON objects that contain chat history on the server is not a good idea. For one, you are taking up memory that will be held for an indefinite period. If you have multiple clients all talking to each other, these objects will continue to grow and eventually impact performance. Secondly, once the server is restarted, or after you clean up these objects, the chat history is lost.
The ideal solution is to store messages in a database; a simple option is MongoDB. Whenever a user logs in to the app, query the db for that user's chat history (here you can define how far back you want to go) and send them an initial response containing this data. Then whenever a message is sent, insert that message into the table/collection for future reference. This way the server is only responsible for sending chat history during the initial sign-on; after that, the client is responsible for keeping up with any new messages.
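A minimal sketch of that with the official MongoDB Node.js driver (the connection URL, db name, "messages" collection, and createdAt field are assumptions; user and safe_tags_replace come from the question):
var { MongoClient } = require('mongodb');
var client = new MongoClient('mongodb://localhost:27017'); // connection URL is an assumption

// On login: query that user's chat history and send it as the initial response
async function getChatHistory(limit) {
    await client.connect(); // safe to call repeatedly in the v4+ driver
    var recent = await client.db('chat').collection('messages')
        .find({})
        .sort({ createdAt: -1 })
        .limit(limit)
        .toArray();
    return recent.reverse(); // oldest-first for display
}

// On each chat message: insert it for future sign-ons
async function saveMessage(user, m) {
    await client.connect();
    await client.db('chat').collection('messages').insertOne({
        name: user.name,
        message: safe_tags_replace(m.msg),
        user: user.steamid,
        createdAt: new Date()
    });
}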

Periodic refresh or polling

I am trying to use periodic refresh (ajax)/polling on my site via XMLHttpRequest (XHR) to check every 10 seconds whether a user has a new message in the database, and if so, inform him/her by creating a div dynamically like this:
function shownotice() {
    var divnotice = document.createElement("div");
    var closelink = document.createElement("a");
    closelink.onclick = this.close;
    closelink.href = "#";
    closelink.className = "close";
    closelink.appendChild(document.createTextNode("close"));
    divnotice.appendChild(closelink);
    divnotice.className = "notifier";
    divnotice.setAttribute("align", "center");
    document.body.appendChild(divnotice);
    divnotice.style.top = document.body.scrollTop + "px";
    divnotice.style.left = document.body.scrollLeft + "px";
    divnotice.style.display = "block";
    request(divnotice);
}
Is this a reliable or stable way to check for messages? When I look under Firebug, a lot of requests are going to my database. Can this method bring my database down because of too many requests? Is there another way to do this? When I log in to Facebook and check under Firebug, no requests are happening, but I know they are using periodic refresh too... how do they do that?
You can check for new data every 10 seconds, but instead of checking the db, you need to do a lower impact check.
What I would do is modify the db update process so that when it makes a change to some data, it also updates the timestamp on a file to show that there is a recent change.
If you want better granularity than "something changed somewhere in the db" you can break it down by username (or some other identifier). The file(s) to be updated would then be the username for each user who might be interested in the update.
So, when your script asks the server if there is any information for user X newer than time t, instead of making a DB query, the server-side script can just compare the timestamp of a file with the time parameter and see if there is anything new in the database.
In the process that is updating the DB, add code that (roughly) does:
foreach username interested in this update
{
    touch the file \updates\username
}
Then your function to see if there is new data looks something like:
function NewDataForUser(string username, time t)
{
    timestamp ts = GetLastUpdateTime("\updates\username");
    return (ts > t);
}
Once you find that there is new data, you can then do a full blown DB query and get whatever information you need.
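For concreteness, a Node.js sketch of that file-timestamp check (the updates directory and the function names are assumptions):
var fs = require('fs');
var path = require('path');

// Called from the DB update process whenever a user's data changes
function touchUpdateFile(username) {
    var file = path.join('updates', username);
    fs.closeSync(fs.openSync(file, 'w')); // creating/truncating refreshes the mtime
}

// Called by the polling endpoint: no DB query, just compare the file mtime with the client's time
function newDataForUser(username, since) {
    try {
        return fs.statSync(path.join('updates', username)).mtimeMs > since;
    } catch (e) {
        return false; // no update file yet means nothing new
    }
}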
I left facebook open with firebug running and I'm seeing requests about once a minute, which seems like plenty to me.
The other approach, used by Comet, is to make a request and leave it open, with the server dribbling out data to the client without completing the response. This is a hack, and violates every principle of what HTTP is all about :). But it does work.
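A minimal sketch of that long-polling idea in Node.js with Express (the /poll route and the handle function are assumptions):
var express = require('express');
var app = express();
var waiting = []; // responses parked until there is something to report

app.get('/poll', function (req, res) {
    waiting.push(res); // hold the request open instead of answering immediately
    req.on('close', function () {
        waiting = waiting.filter(function (r) { return r !== res; });
    });
});

// When new data arrives, answer everyone who was waiting
function publish(event) {
    waiting.forEach(function (res) { res.json(event); });
    waiting = [];
}

app.listen(3000);

// Client: re-issue the request as soon as the previous one completes
function poll() {
    fetch('/poll')
        .then(function (res) { return res.json(); })
        .then(function (event) { handle(event); poll(); }); // handle() is hypothetical
}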
This is quite unreliable and probably far too taxing on the server in most cases.
Perhaps you should have a look into a push interface: http://en.wikipedia.org/wiki/Push_technology
I've heard Comet is the most scalable solution.
I suspect Facebook uses a Flash movie (they always download one called SoundPlayerHater.swf) which they use to do some comms with their servers. This does not get caught by Firebug (might be by Fiddler though).
This is not a better approach, because you end up querying your server every 10 seconds even when there are no real updates.
Instead of this polling approach, you can simulate server push (reverse AJAX, or Comet). This will greatly reduce the server workload, and the client is only updated when there is an update on the server side.
As per Wikipedia:
Reverse Ajax refers to an Ajax design pattern that uses long-lived HTTP connections to enable low-latency communication between a web server and a browser. Basically it is a way of sending data from client to server and a mechanism for pushing server data back to the browser.
For more info, check out my other response to a similar question.
