I want to create a choice-based text adventure game using Node.js/Express/Passport with the client side being HTML5 with jQuery. To provide a better understanding of what I want, here's how the game works so far:
The client's browser sends a JSON object to the server via jQuery to set up a new game.
The server starts a new Express session and sends a JSON object with the scene text and choice button data.
The client clicks on a choice, sending a JSON object to the server with information such as which scene to call next, etc.
This keeps going throughout the game, and every so often the server accesses the session data to modify the player's inventory, health, etc.
If I want the game to be able to support hundreds of players at once, how would I get Express to handle that many sessions at once? My main concern is overall speed of the node.js server, as well as RAM usage.
100s is not lots :) At that volume you probably don't need to give this much special consideration - see if you have a problem before solving it. File I/O is likely to be the main bottleneck here, so keep the global game data cached in memory as much as possible. If you do find you are hitting scale issues, start by using the Node cluster module to leverage multiple CPUs on the same machine... if that still isn't enough, run your Node code on multiple servers and put a load balancer in front of them. The key to scaling across servers is storing session state in a database that any server can access.
I'm developing an online game fully in JavaScript (both server and client). Since people can make custom maps/servers in my game, one option is for the base server script to upload the map file to the client over websockets upon connection, but I haven't found anywhere how to limit the websocket speed so the server won't lag every time a new person connects and downloads the map.
One way of getting around this that I had thought of was splitting the buffer where the map file is stored (I read it using fs.readFileSync, then keep it in a buffer until requested) into many small buffers, then uploading only one of them per second to the client, thus creating a "fake" upload speed limit for the server and theoretically avoiding lag and/or crashes.
My question is: is that a good idea? Would that work as intended?
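The splitting-and-throttling idea from the question could be sketched like this. The `splitIntoChunks` helper, the `socket` variable, and the 64 KB chunk size are illustrative assumptions, not code from the question.

```javascript
// Split a cached buffer into fixed-size chunks for throttled sending.
function splitIntoChunks(buffer, chunkSize) {
  const chunks = [];
  for (let offset = 0; offset < buffer.length; offset += chunkSize) {
    chunks.push(buffer.subarray(offset, offset + chunkSize));
  }
  return chunks;
}

// Hypothetical use with a websocket connection, one chunk per second
// as proposed in the question:
// const mapBuffer = fs.readFileSync('map.dat'); // cached at startup
// const chunks = splitIntoChunks(mapBuffer, 64 * 1024);
// let i = 0;
// const timer = setInterval(() => {
//   if (i >= chunks.length) return clearInterval(timer);
//   socket.send(chunks[i++]);
// }, 1000);
```

Note that this caps bandwidth per client rather than per server, so many simultaneous downloads still add up; a shared rate limiter or backpressure-aware streaming would bound total load more directly.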
I'm currently making plans for a real-time web application. More specifically, it will be an idle game where players can interact with each other. I will use a server storing messages sent from players to other players, which allows for communication when any party is not online.
The game page will load all game assets, such as images and static game data, so there will be no need to reload the page at any time. I'm planning to send more specific game data over websockets (such as world map data), which may result in somewhat larger packets. I expect these packets to range from very small (chat messages, 2 KB) to very large (world data, 100~300 KB). I expect about 3 chat messages to be exchanged per minute per user, each sent to a number of users (max 10~20), and I expect a user to receive a large set of data every minute or two.
I'm mainly interested in finding out what kind of performance-related issues may occur, and how to deal with these. A few questions coming to my mind are:
Is it smart to use websockets in this way?
Will it be better to use one socket or to use two sockets (one for small data fragments, one for big data fragments)? Socket connections will be made to the same server.
What are hazards to keep in mind? (I do not mean backward compatibility issues here)
Compared to AJAX, do you expect differences in server load? Will these be significant?
What are performance effects to the client and server?
I'm planning to write a real time co-op multiplayer game. Currently I'm in the research phase. I've already written a turn-based game which used websockets and it was working fine.
I haven't tried writing a real-time game using this technology, however. My question is about websockets: is there an alternative way to handle communication between (browser) clients? My idea is to keep the game state in each client and only send deltas to the clients, using the server as a mediator/synchronization tool.
My main concern is network speed. I want clients to be able to receive each other's actions as fast as possible so my game can stay real time. I have about 20-30 frames per second with less than a KByte of data per frame (which means a maximum of 20-30 KBytes of data per second per client).
I know that things like "network speed" depend on the connection but I'm interested in the "if all else equals" case.
From a standard browser, a webSocket is going to be your best bet. The only two alternatives are webSocket and Ajax. Both are TCP under the covers, so once the connection is established they pretty much offer the same transport. But a webSocket is a persistent connection, so you save the connection overhead every time you want to send something. Plus, the persistent connection of the webSocket allows you to send directly from server to client, which you will want.
In a typical gaming design, the underlying gaming engine needs to adapt to the speed of the transport between the server and any given client. If the client has a slower connection, then you have to drop the number of packets you send to something that can keep up (perhaps fewer frame updates in your case). The connection speed is what it is so you have to make your app deliver the best experience you can at the speed that there is.
Some other things you can do to optimize the use of the transport:
Collect all data you need to send at one time and send it in one larger send operation rather than lots of small sends. In a webSocket, don't send three separate messages each with their own data. Instead, create one message that contains the info from all three messages.
Whenever possible, don't rely on the latency of the connection by sending, waiting for a response, sending again, waiting for response, etc... Instead, try to parallelize operations so you send, send, send and then process responses as they come in.
Tweak settings for outgoing packets from your server so there is no Nagle delay waiting to see if there is other data to go in the same packet. See Nagle's Algorithm. I don't think you have the ability in a browser to tweak this from the client.
Make sure your data is encoded as efficiently as possible for smallest packet size.
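The first and third points above can be sketched as follows. The `sendBatched` helper and its envelope format are assumptions for illustration; the Nagle tweak shown in the comment assumes a server built on the `ws` package, where the underlying TCP socket is reachable via the upgrade request.

```javascript
// Point 1: batch several logical messages into one send operation
// instead of three separate socket.send() calls.
function sendBatched(socket, messages) {
  socket.send(JSON.stringify({ type: 'batch', messages }));
}

// Point 3 (server side only; browsers expose no such knob): disable
// Nagle's algorithm on the underlying TCP socket at connection time.
// wss.on('connection', (socket, req) => {
//   req.socket.setNoDelay(true);
// });
```

The client then unpacks `messages` from one frame, paying the per-message framing and syscall overhead once rather than three times.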
Why make the server push data for notifications (as SignalR does) when it can be done client side?
Using a JavaScript timing event that checks for recent updates at specified intervals, the user can get notifications as long as they remain connected to the server.
So my question is: why do more work on the server that the client can already do?
It's not more work for the server, it's less work. Suppose that you have 10,000 clients (and the number could easily be in the 100Ks or even millions for popular websites) polling the server every X seconds to find out if there's new data available for them. The server would have to handle 10,000 requests every X seconds even if there's no new data to return to the clients. That's huge overhead.
When the server pushes updates to the clients, the server knows when an update is available and it can send it to just the clients this data is relevant to. This reduces the network traffic significantly.
In addition it makes the client code much simpler, but I think the server is the critical concern here.
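The push model described above amounts to the server keeping a registry of which clients care about which data and sending only when something changes. A minimal sketch, where the `Map`-based registry, the topic names, and the payload shape are all assumptions:

```javascript
// topic -> Set of connected sockets interested in that topic
const subscribers = new Map();

function subscribe(topic, socket) {
  if (!subscribers.has(topic)) subscribers.set(topic, new Set());
  subscribers.get(topic).add(socket);
}

// Called only when new data actually exists; returns how many clients
// were notified. With polling, every client would instead hit the
// server every X seconds regardless of whether anything changed.
function publish(topic, data) {
  const sockets = subscribers.get(topic);
  if (!sockets) return 0;
  const payload = JSON.stringify({ topic, data });
  for (const socket of sockets) socket.send(payload);
  return sockets.size;
}
```

Traffic is then proportional to actual updates times interested clients, not to the polling interval times the total client count.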
First, if you don't use server push you won't get instant updates - for example, you can't build a chat application. Second, why bother the client with a job it isn't designed to do? Third, you will have performance issues on the client because, as @Ash said, the server is a lot more powerful than a client computer.
Is there any advantage to having two distinct websocket connections to the same server from the same client? To me this seems like a bad design choice, but is there any reason why/where it would work out better?
There are several reasons why you might want to do that but they probably aren't too common (at least not yet):
You have both encrypted and unencrypted data that you are sending/receiving (e.g. some of the data is bulky but not sensitive).
You have both streaming data and latency sensitive data: imagine an interactive game that occasionally has streamed video inside the game. You don't want large media streams to delay receipt of latency sensitive normal game messages.
You have both textual (e.g. JSON control messages) and binary data (typed arrays or blobs) and don't want to bother with adding your own protocol layer to distinguish since WebSockets already does this for you.
You have multiple WebSocket sub-protocols (the optional setting after the URI) that you support and the page wants to access more than one (each WebSocket connection is limited to a single sub-protocol).
You have several different WebSocket services sitting behind the same web server and port. The way the client chooses per connection might depend on URI path, URI scheme (ws or wss), sub-protocol, or perhaps even the first message from client to server.
I'm sure there are other reasons but that's all I can think of off the top of my head.
I found that it can make client logic much simpler when you are only subscribing to updates of certain objects being managed by the server. Instead of devising a custom subscription protocol for a single channel, you can just open a socket for each element.
Let's say you obtained a collection of elements via a REST API at
http://myserver/api/some-elements
You could subscribe to updates of a single element using a socket url like this:
ws://myserver/api/some-elements/42/updates
Of course, one can argue that this doesn't scale for complex pages. However, for small and simple applications it might make your life a lot easier.
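On the server side, the per-element scheme above comes down to parsing the element id out of the socket URL at connection time. A sketch assuming the `ws` package and the URL pattern from the answer; `registerForUpdates` is a hypothetical subscription helper:

```javascript
// Extract the element id from a path like /api/some-elements/42/updates;
// returns null if the URL doesn't match the expected pattern.
function parseElementId(url) {
  const match = /^\/api\/some-elements\/(\d+)\/updates$/.exec(url);
  return match ? Number(match[1]) : null;
}

// Hypothetical wiring with a ws server:
// const { WebSocketServer } = require('ws');
// const wss = new WebSocketServer({ port: 8080 });
// wss.on('connection', (socket, req) => {
//   const id = parseElementId(req.url);
//   if (id === null) return socket.close();
//   registerForUpdates(id, socket); // push future changes to element `id`
// });
```

The client simply opens `new WebSocket('ws://myserver/api/some-elements/42/updates')` and handles `onmessage`, with no subscription protocol to implement.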