Alrighty! I'm working on a small chat add-on for my website, and when a user logs on they'll see the chat history. I'm using a JavaScript object to store all messages on my Node.js server. Now I'd like it so that whenever there are more than fifty entries in the object, adding the latest message removes the oldest. This should keep my server from handling a lot of messages every time a user logs on. How would I do this?
Here's how I store my messages:
var messages = {
    "session": []
};

messages.session.push(
    {
        "name": user.name,
        "message": safe_tags_replace(m.msg),
        "image": user.avatar,
        "user": user.steamid,
        "rank": user.rank
    }
);
I could also just load the last fifty messages from the JSON object, but if I run my server for a long time without restarting it, this object will become extremely big. Would that be a problem?
Since you are pushing elements to the end of your array, you could just use array shift() to remove the first element of the array if needed. e.g.
var MAX_MESSAGES = 50;
if (messages.session.length > MAX_MESSAGES) { messages.session.shift(); }
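If you want to keep the push and the trim in one place, you could wrap both in a small helper. A minimal sketch (addMessage is just an illustrative name):

function addMessage(msg) {
    messages.session.push(msg);
    // Drop the oldest entries until we are back under the cap.
    while (messages.session.length > MAX_MESSAGES) {
        messages.session.shift();
    }
}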
To answer the second part of your question:
The more data you hold, the more physical memory you consume, obviously, which can by itself be a problem, especially on mobile devices and old hardware. Also, huge arrays will impact performance on lookup, iteration, some insert operations, and sorting.
Storing JSON objects that contain chat history on the server is not a good idea. For one, you are taking up memory that will be held for an indefinite period. If you have multiple clients all talking to each other, these objects will continue to grow and eventually impact performance. Secondly, once the server is restarted, or after you clean up these objects, the chat history is lost.
The ideal solution is to store messages in a database; a simple option is MongoDB. Whenever a user logs in to the app, query the DB for that user's chat history (here you can define how far back you want to go) and send them an initial response containing this data. Then, whenever a message is sent, insert that message into the table/collection for future reference. This way the server is only responsible for sending chat history during the initial sign-on; after that, the client is responsible for appending any new messages.
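A rough sketch of that flow with the Node.js mongodb driver (assuming the 3.x callback API; database, collection, and field names are just placeholders):

var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017', function(err, client) {
    if (err) throw err;
    var messagesCol = client.db('chat').collection('messages');

    // Whenever a message is sent: persist it for future logins.
    function saveMessage(msg) {
        msg.createdAt = new Date();
        messagesCol.insertOne(msg);
    }

    // On login: fetch only the newest 50 messages, oldest first.
    function loadHistory(done) {
        messagesCol.find().sort({ createdAt: -1 }).limit(50)
            .toArray(function(err, docs) {
                done(err, docs && docs.reverse());
            });
    }
});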
My app produces pages of 20 content items that may include items liked by the current user and other users. I use Firebase Realtime Database to keep track of the likes but the content is stored elsewhere and the pages are server-rendered, so a completely new page is served each time. I need to identify and mark the liked items and show the status of each item in real time:
Its number of likes, and
Whether it has been liked by the current user.
To get the number of likes in real time, I loop through the items with this function, modified from the Firebase Friendlypix demo:
function registerForLikesCount(postId, likesCallback = null) {
    const likedRef = firebase.database().ref(`/liked/${postId}`);
    likedRef.on("value", function(snapshot) {
        // Guard against missing data and a missing callback.
        if (snapshot.val() && likesCallback) {
            likesCallback(snapshot.val().count);
        }
    });
}
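For context, the per-page loop that drives it looks essentially like this (itemIds and the DOM update are placeholders for my real code):

// itemIds holds the ids of the 20 items rendered on the current page.
itemIds.forEach((postId) => {
    registerForLikesCount(postId, (count) => {
        // Update the like counter shown for this item.
        document.querySelector(`#likes-${postId}`).textContent = count;
    });
});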
That's 20 calls for each page, and each one sets an event listener, right? It's fast, but I don't know about the cost. What resources are used by all those listeners (a) if nothing happens, or (b) if a like is registered and transmitted to, say, 100 concurrent users?
To keep track of whether the current logged-in user has liked any of the items, I've tried two ways:
For each page, grab the user's likes from their Firebase node. That's one call to Firebase, possibly grabbing a few dozen IDs (not more), and then, during the above-mentioned loop, checking whether any of the IDs are included.
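A rough sketch of this first approach (the /user-likes/{uid} path and markAsLiked are assumptions, not my actual code):

firebase.database().ref(`/user-likes/${firebase.auth().currentUser.uid}`)
    .once("value")
    .then((snapshot) => {
        const likedIds = Object.keys(snapshot.val() || {});
        // During the above-mentioned loop, mark the items this user liked.
        itemIds.forEach((postId) => {
            if (likedIds.includes(postId)) {
                markAsLiked(postId); // placeholder for the UI update
            }
        });
    });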
Or, using a snippet from Friendlypix, for another 20 calls to Firebase:
function registerToUserLike(postId, callback) {
    // Load and listen to new likes.
    const likesRef = firebase.database().ref(`likes/${postId}/${firebase.auth().currentUser.uid}`);
    likesRef.on('value', (data) => callback(!!data.val()));
}
registerToUserLike has the advantage of keeping any of the user's open tabs or devices updated and shows off the magic of realtime, but at what price?
I would like to understand what resources are consumed by this activity in order to estimate the cost of running my application.
The overhead of the Firebase protocol for all these listeners is minimal: each listener sends the path it wants to listen to up to the server (not a paid operation for Firebase, though your mobile provider may charge for the bandwidth), and then receives the data at that path and updates to the data at that path.
The number of calls is not a significant part of the cost here, so the only way to reduce the cost would be to listen to less data. In a nutshell: it matters very little whether you have 20 calls listening to 1 node, or 1 call listening to 20 nodes in Firebase.
For more on why this is, see my answer to the question Speed up fetching posts for my social network app by using query instead of observing a single event repeatedly.
I have a chat app using Firebase as a realtime database and React Native. I'm trying to figure out the most efficient way to set up the listener for chat messages from Firebase in terms of minimizing read operations and transferring data. Here is my data structure:
- messages
  - chatId
    - messageId
      - sentBy
      - timestamp
      - text
As I see it, I have two options: either ref.on("child_added") or ref.on("value").
If I use ref.on("child_added"), the advantage is that when a new message is sent then only the newest message is retrieved. The problem though is that when the conversation is loaded the read operation is called for each message in the chat. If a conversation is hundreds of messages long, then that's hundreds of read operations.
The other option is to use ref.on("value"). The problem here is that on every new message added, the entire conversation is resent instead of just the most recent message. The advantage is that when the conversation is loaded, only one read operation is called to transfer the entire conversation to the screen.
I want some combination of the two of these in which when the conversation is loaded, there is one read operation that brings the entire contents of the conversation, AND when a new child node is added (a new message) only that message is transmitted to the listener. How can I achieve this?
firebaser here
There is no difference between the wire traffic for a value listener and child_* listeners on the same location/query. If you check the Network tab of your browser, you can see exactly what is sent and retrieved, and you'll see that it's exactly the same for both listener types.
The difference between value and child_* events is purely made client-side to make it easier for you to update the UI. In fact, even when you attach both value and child_* listeners to the same query/location, Firebase will retrieve the data only once.
The common way to do what you want is to attach both child_* and value listeners to the query/location. Since the value listener is guaranteed to be fired last, you can use that fact to detect when the initial load is done.
Something like:
var chatRef = firebase.database().ref("messages/chatId");
var initialLoadDone = false;

chatRef.on("child_added", (snapshot) => {
    if (initialLoadDone) {
        // handle a single new message
        ...
    }
});

chatRef.once("value", (snapshot) => {
    snapshot.forEach((messageSnapshot) => {
        // handle one message from the initial load
        ...
    });
    initialLoadDone = true;
});
Suggestion: Use Firestore. It maintains a cache of your data and efficiently handles such scenarios.
You can use ref.once('value') to get the current nodes only once and then ref.on('child_added') for subsequent additions. More performance notes.
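A sketch of that combination (note that child_added also fires for existing children, so you have to skip everything up to the last key you already loaded; renderMessage is a placeholder):

const chatRef = firebase.database().ref("messages/chatId");

chatRef.once("value").then((snapshot) => {
    let lastKey = null;
    snapshot.forEach((msg) => {
        renderMessage(msg.val()); // placeholder for the UI code
        lastKey = msg.key;
    });
    // startAt(lastKey) is inclusive, so skip the one duplicate child.
    const newOnes = lastKey ? chatRef.orderByKey().startAt(lastKey) : chatRef;
    newOnes.on("child_added", (msg) => {
        if (msg.key !== lastKey) renderMessage(msg.val());
    });
});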
Edit: I believe the Firebase database handles this efficiently with just ref.on('value'). Checking the network tab after adding a new node to my database, I noticed that the amount of data transferred was very low. This might mean that Firebase caches your previous data by default. I'd recommend you look at your network tab and decide accordingly, or wait for someone from their team to give directions.
I am currently developing a game using Node.js + Socket.IO but am having a problem with the amount of data being sent. The server currently sends about 600-800 kbps, which is not good at all.
Here are my classes:
- Shape
  - Pentagon
  - Square
  - Triangle
- Entity
  - Player
  - Bullet
Every frame (60 fps), I update each of the classes, and each class has an updatePack that is sent to the client. The updatePack is pretty simple; it only contains the object's id and coords.
At first, I thought everyone's games were like that (silly me). Then I looked into several simple games like agar.io, slither.io, diep.io, and rainingchain.com and found that they use < 100 kbps, which made me realize that I am sending too much data.
I also looked into compressing the data being sent, but found out that data is automatically compressed when sent with Socket.IO.
Here is how I send my data:
for (var i in list_containing_all_of_my_sockets) {
    var socket = list_containing_all_of_my_sockets[i];
    var data = set_data_function();
    socket.emit('some message', data);
}
How can I make it send less data? Is there something that I missed?
Opinionated answer, considering a way games handle server-client traffic. This is not the answer:
Rendering is a presentation concern. Your server, which is the single source of truth about the game state, should only care about changing and advertising the game state. 60fps rendering is not a server concern, and therefore the server shouldn't be sending 60 updates per second for all moving objects (you might as well be better off just rendering the whole thing on the server and sending it over as a video stream).
The client should know about the game state and know how to render the changing game state at 60fps. The server should only send either state changes or events that change state to the client. In the latter case the client would know how to apply those events to change the state in tandem with the state known to the server.
For example, the server could be just sending the updated movement vectors (or even the acting forces) for each object and the client could be calculating the coordinates of each object based on their currently known movement vectors + the elapsed time.
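As a sketch of that idea (the field names are made up): the server emits a state change once, and each client integrates positions locally on every frame:

// Server: emit only when an object's movement vector actually changes.
socket.emit('velocity', { id: obj.id, x: obj.x, y: obj.y, vx: obj.vx, vy: obj.vy, t: Date.now() });

// Client: render at 60fps with no further network traffic.
function positionAt(obj, now) {
    var dt = (now - obj.t) / 1000; // seconds since the last server update
    return { x: obj.x + obj.vx * dt, y: obj.y + obj.vy * dt };
}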
Maybe it's better not to send data every frame, but instead send it only on particular events (e.g. collisions, deaths, spawns).
Whenever a message is sent over a network, it contains not only the actual data you want to send but also a lot of additional data for routing, error prevention, and other things.
Since you're sending all your data in individual messages, you incur this overhead for every single one of them.
So instead you should gather all the data you need to send, save it into one object, and send that in a single message.
You could use arrays instead of objects to cut down the size (by omitting the keys). It will be harder to use the data later, but... you've got to make compromises.
Also, by the looks of it, Socket.IO can compress data too, so you can utilize that.
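Putting those two ideas together, a sketch of a batched, array-packed broadcast might look like this (gameObjects and the [id, x, y] format are arbitrary):

// Gather one flat update per object: [id, x, y].
var updates = [];
for (var id in gameObjects) {
    var o = gameObjects[id];
    updates.push([o.id, o.x, o.y]);
}
// One emit per tick instead of one emit per object.
io.emit('tick', updates);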
As for updating, I've made a few games using Node.js and Socket.IO. The process is the following:
Player with socket id player1 sends his data (let's say coordinates {x:5, y:5})
Server receives the data and saves it in an object containing all players' data:
{
    "player1": { "x": 5, "y": 5 },
    "player2": ...,
    ...
}
Server sends that object to player1.
player1 receives the object and uses the data to visualize the other players.
Basically, a player receives data only after he has sent his own. This way, if his browser crashes, you don't bombard him with data. Otherwise, if you've sent 15 updates while the user's browser hung, he needs more time to process those 15 updates; during that time, you send even more updates, so the browser needs even more time to process them. This snowballs into a disaster.
When a player receives data only after sending his own, you ensure that:
The sending player gets the other players' data immediately, meaning that he doesn't wait for the server's 60 times-per-second update.
If the player's browser crashes, he no longer sends data and therefore no longer receives it since he can't visualize it anyway.
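A minimal sketch of that request/response flow on the server (the event names are illustrative):

var players = {}; // every player's latest data, keyed by socket id

io.on('connection', function(socket) {
    socket.on('update', function(data) {
        players[socket.id] = data;       // save the sender's data...
        socket.emit('players', players); // ...and reply with everyone's
    });
    socket.on('disconnect', function() {
        delete players[socket.id];
    });
});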
I'm trying to design a RESTful API to serve data to a front-end JS app, and in the future a native mobile app, once I get round to writing it.
I'm fairly new to front-end dev, so API designs are also fairly new to me. I'm writing a table tennis league app to start my learning, and one of the endpoints doesn't seem to quite fit with any example I've read of recommended API structures.
I have two entities, leagues and players. A league has a collection of players, and when a result is entered the players switch "position" in the league if the winner was below the loser before the match was entered.
A standard REST API might have endpoints as follows to update the details of a specific player within the league:
(POST/PATCH) - /api/v1/leagues/{league-id}/players/{player-id}
e.g. /api/v1/leagues/1/players/12
This is fine, but in my case, when a result is entered into the web app, two different players need their "position" value updated via the API. Ideally, I would have this set as a unique field in the database, so only one player can be at each position within the league at any given time. However, if that were the case, then using an API endpoint as above, my front-end app would need to calculate the new positions of the players based on the entered result, update player 1, and then, if successful, update player 2 (rolling back on failure). Following this structure, the position field cannot be made unique, because after the update of player 1, both players have the same position value until player 2 is updated.
The only other solution that I can think of is to have some other appropriately named endpoint that takes a "result" object, works out the players' new positions on the server side, updates accordingly, and returns some data for the UI to re-bind and update to.
So my question is this: which of the two methods outlined above would you choose, and why?
If you choose the latter, what data would you return from the API call for the UI to bind to? A full league of player data? Or just the two players that have been updated?
Thanks
I think I see two problems:
you haven't defined enough resources
you are confusing http with your domain model
Try something like this
PUT /api/v1/matches/{match-id}
{ winner : { id }, loser : { id }, ... }
Put to the API a message describing the outcome of the game (POST is acceptable, PUT is better for idempotency).
As a side effect of this message's arrival, incorporate the results into your domain model. It's your domain model that should include the rules that describe how the rankings of players change when a game is finished.
When somebody wants to see the rankings of the players...
GET /api/v1/leagues/{league-id}/standings
you send them to a resource that returns a representation of the current rankings in your model.
The spelling of the URI doesn't particularly matter; I prefer "standings" because I believe that's a real thing in your domain. But if you wanted to reflect the data structure of your resources without additional context, you might use a spelling like
GET /api/v1/leagues/{league-id}/players?orderBy=position
The data representation in the body of the request sent to you by the client isn't a serialization of an entity in your domain model; it's a serialization of a message addressed to your domain model.
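A sketch of how the server side of that might look with Express (league.recordResult and league.standings stand in for your domain model):

const express = require('express');
const league = require('./league'); // placeholder for the domain model
const app = express();
app.use(express.json());

// Idempotent: reporting the same match result twice leaves the same state.
app.put('/api/v1/matches/:matchId', (req, res) => {
    const { winner, loser } = req.body;
    // The domain model applies the ranking rules and swaps positions.
    league.recordResult(req.params.matchId, winner.id, loser.id);
    res.status(204).end();
});

app.get('/api/v1/leagues/:leagueId/standings', (req, res) => {
    res.json(league.standings(req.params.leagueId));
});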
The choice of where to calculate the positions within the league is really subjective; I would suggest doing it on the server, since it involves searching the database again (for other players' scores).
Since you have multiple players and you may update 2 players at a time, it would be better to send the players' scores in the request body with their positions, and return the calculated full league information in the response for each request, because that would simplify your client code and ensure that you get the latest data.
This suggestion assumes that you do not have a large number of players in a league. If you do (say, more than 100), I would suggest that approach 1 is better.
So your API URL can be:
(POST) /api/v1/leagues/{league-id}
And your request body can be:
"players":[
{"player-id":"101", "newScore":"10"},
{"player-id":"103","newScore":"20"}
]
Your response can be the full list of players in the resulting league.
"players":[
{"player-id":"101", "position":"1"},
{"player-id":"102", "position":"2"},
{"player-id":"103","position":"3"}
{"player-id":"104","position":"4"}
{"player-id":"105","position":"5"}
]
I have a MEAN.js server running that allows a user to check their profile. I want to have a setInterval-like process running every second which, based on a condition, retrieves data from another server and updates MongoDB (simple polling / long polling). This also updates the values that the user sees.
Q: Is this event loop allowed in Node.js, and if so, where does the logic go that starts the interval when the server starts? Or can events only be triggered by user actions (e.g., the user clicking their profile to view the data)?
Q: What are the implications of having both ends reading and writing to the same DB? Will collisions just overwrite each other, or fault? Is there info on how much read/write traffic would overload it?
I think you can safely run a MongoDB cron job that updates every x days/hours/minutes. In the case of a user profile, I assume that's not critical data that requires you to update your DB in real time.
If you need to update in real time, then set up DB replication and point the reads at a DB that's replicated in real time.
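For the first question, a minimal sketch of such a periodic job started at server startup (fetchFromOtherServer is a placeholder for the real fetch, and Profile is an assumed mongoose model):

// In the server bootstrap (e.g. server.js), after the Express app
// and the mongoose connection are set up:
var mongoose = require('mongoose');
var Profile = mongoose.model('Profile'); // assumed existing model

setInterval(function() {
    // Poll the other server, then write the result into MongoDB.
    fetchFromOtherServer(function(err, data) {
        if (err || !data) return;
        Profile.updateOne({ _id: data.userId }, { $set: { stats: data.stats } }).exec();
    });
}, 1000);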