Use React to create a 100K multiplayer tile game (JavaScript)

I'm learning React.
I'd like to create a game with a basic tile board (like http://richard.to/projects/beersweeper/, but where tiles can have two states: 'available' or 'already clicked').
In terms of speed, React looks interesting: with its virtual DOM/diffing, I could adjust only the CSS and text inside tiles that have been clicked (so that they visually differ from those not yet clicked by anyone).
My goal (and personal challenge, haha) is to make this game playable by 1,000 simultaneous players who can click wherever they want on a 100,000-tile board. (Distribution of tile status among clients in real time would be done with Firebase.)
Should I use plain React and its built-in features (onClick events, listeners...), or is it impossible with React alone to handle that many events/listeners for 1,000 people on 100K tiles in real time, with any user able to click anywhere (on available tiles)?
Or should I use alternative/complementary tools and techniques such as canvas, React ART, GPU acceleration, WebGL, texture atlases...?

WebGL is the right answer. It's also quite complicated to work with.
Depending on the size of the tiles, React could work, but you can't render 100k DOM nodes performantly, no matter how you do it. Instead, render only the subset of tiles visible to the user.
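A sketch of that visible-subset idea: with a fixed tile size, the viewport maps directly onto a small rectangle of tile indices, so only a few hundred tiles ever need to exist on screen. The tile size and the 400x250 board layout here are assumptions for illustration.

```javascript
// Sketch: compute only the tiles inside the camera's viewport.
// Tile size and board dimensions are illustrative assumptions.
const TILE_SIZE = 16;      // pixels per tile
const BOARD_COLS = 400;    // 400 x 250 = 100,000 tiles
const BOARD_ROWS = 250;

// Given the scroll offset and viewport size in pixels, return the
// inclusive range of tile rows/cols that need to be rendered.
function visibleTileRange(scrollX, scrollY, viewW, viewH) {
  const firstCol = Math.max(0, Math.floor(scrollX / TILE_SIZE));
  const firstRow = Math.max(0, Math.floor(scrollY / TILE_SIZE));
  const lastCol = Math.min(BOARD_COLS - 1, Math.floor((scrollX + viewW - 1) / TILE_SIZE));
  const lastRow = Math.min(BOARD_ROWS - 1, Math.floor((scrollY + viewH - 1) / TILE_SIZE));
  return { firstCol, firstRow, lastCol, lastRow };
}
```

A 320x240 viewport, for instance, only ever touches a 20x15 window of tiles, regardless of board size.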
To pull something like this off, you'll need a lot of optimized code, and Firebase most likely won't be up to par. I'd recommend a binary protocol over WebSockets and a database that makes sense (fast lookups on multiple numeric index ranges, plus subscriptions).
Ultimately, I'd probably go with:
webgl (compare three.js and pixi.js)
custom data server in Go (with persistence/fallback handled by a MySQL-compatible engine like MariaDB or AWS Aurora)
websocket server written in Go
websockets (no wrapper library, binary protocol)
The only reason to pick Go over node.js for the websocket server is CPU performance, which means lower latency and more clients per server. They perform about the same on the network side.
You'll probably ignore most of this, but just understand that if you do have performance problems, switching some of the parts out for these will help.
Do a prototype that handles 2 concurrent users and 1000 tiles, and go from there. In order of priority:
don't render 100k DOM nodes!
webgl instead of the DOM
websockets via socket.io or similar (not Firebase)
custom data server in Go
binary websocket protocol not using socket.io (e.g. the ws package in node)
websocket server in Go (not really that important, maybe never)
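To illustrate what the binary protocol buys you: a click event packs into 6 bytes with a DataView, versus a few dozen bytes of JSON. The field layout below is an assumption, not a standard.

```javascript
// Sketch of a binary click message: a 4-byte tile index plus a
// 2-byte player id fits a click into 6 bytes.
function encodeClick(tileIndex, playerId) {
  const buf = new ArrayBuffer(6);
  const view = new DataView(buf);
  view.setUint32(0, tileIndex); // supports boards far beyond 100k tiles
  view.setUint16(4, playerId);  // up to 65,535 players
  return buf;
}

function decodeClick(buf) {
  const view = new DataView(buf);
  return { tileIndex: view.getUint32(0), playerId: view.getUint16(4) };
}
```

Both sides of the wire share this layout, so the server can relay the raw bytes to every subscriber without re-encoding.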

Lots of people use React as the V in MVC.
I believe React is fine for the UI, but ask yourself what the server-side logic will be; you still have to think about the M and the C.
If you are aiming for a 1,000-simultaneous-user load, the keyword 'scalable' will be your friend.
You should also check out node.js for the server-side service: Express.js for its fast implementation, and Pomelo.js, a JS game-server implementation built on node.js.
On the subject of performance: WebGL will most likely boost it. Here is a nice tutorial on the topic: https://github.com/alexmackey/IntroToWebGLWithThreeJS
If you want to build it without GL languages at all, you should dig deeper into JavaScript and create your own pseudo-class library with dynamic data bindings. Otherwise you might end up using a small percentage of a powerful framework that only slows down your API.
I would refrain from using canvas; it is better suited to model visualization than to a game front end. Check out d3.js for its awesomeness (and, unfortunately, its performance issues).
Here is a fiddle I wrote that creates a 100x100 matrix with hovers, and performance is not so good. You can easily tweak it to get a 100k-element matrix: https://jsfiddle.net/urahara/zL0fxyn3/2/
EDIT: WebGL is the only reasonable solution.

Related

Inner workings of browser based real-time MMO games

So, suppose there was a game consisting of a website and a client you can launch from said website. I've looked around a bit, and a relatable example would be Habbo Hotel.
What I'm asking is: what are all the different parts that would make such a game work? For the website part, I'd imagine a server, a database, and some HTML, CSS and PHP coding would be required, but how would the client side operate?
More specifically, how would the client-to-server (and vice versa) real-time communications happen?
Supposing the client were coded in C, how would the integration of C into a (I suppose PHP-framed) browser window be executed?
Note that the client is never downloaded on the user's PC, so where would it reside?
I'm sorry if these are a lot of questions; if the answers would be too tedious to compose, feel free to just leave some documentation or tutorials (which I've looked for but haven't really been able to find) and I'll happily read them on my own. Thanks in advance.
On one side your question is too broad, but on the other I can give you some pointers on how to do this in a modern way:
don't have a client, just a page in a browser
use HTML5 canvas, you may also want to look into SPA (single page application)
connect via websocket, there are HTML5 javascript implementations and PHP or node.js for the server-side
best is, use node.js on the server, PHP would be way too cumbersome
via websocket, send and receive JSON objects
host node.js on its native platform (Linux)
you may want to look into Phaser as an HTML5 client-side canvas rendering framework; it lacks some functionality and is mainly oriented towards twitch-based action games, which don't work well with this architecture because of lag
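A minimal sketch of the "send and receive JSON objects" point: wrapping every message in a small typed envelope keeps the websocket handler a simple switch on the type field. The envelope shape here is an assumption, not a standard.

```javascript
// Sketch: a typed JSON envelope for websocket messages.
// The { type, payload } shape is an illustrative assumption.
function makeMessage(type, payload) {
  return JSON.stringify({ type, payload });
}

function parseMessage(raw) {
  const msg = JSON.parse(raw);
  if (typeof msg.type !== 'string') throw new Error('malformed message');
  return msg;
}

// On the server (node.js with the ws package, for example) this would
// sit inside: socket.on('message', raw => { const m = parseMessage(raw); ... });
```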
This will lead you to one conclusion: JavaScript is at the center of this system. You'll encounter several roadblocks, such as:
security on websockets with SSL for login
avoid SSL for real-time data (above 1 Hz)
UI on the client inside the canvas is not easy to implement, you'll have to re-invent the wheel or find a UI library for that
expect lag: the network code will carry some 20%-30% overhead compared to native C/C# code using TCP/IP or UDP with protobuf (e.g. Lidgren + protobuf), which is what high-frequency AAA titles (MMORPGs and FPSes alike) use
From the questions you asked, I sense a great lack of familiarity with the field. I'd guess you'll have to study for some 6-12+ months beforehand. I recommend this because if you start right away you'll make a lot of errors and waste your time. If any of the names above are unfamiliar, search for them and study them well. And don't start coding yet; there is a very steep learning curve ahead of you!
EDIT: more about server-side
If you want to implement an action-based interactive game (like an FPS or 2D shooter) I have to tell you this.
You may want to look into Unity 3D, using direct TCP/IP connections and binary messages (no HTTP, no websockets; protobuf instead).
C# (client-side) and node.js (server-side) are a good combination. For horizontal scaling you may want to look into cloud computing, docker, provisioning and a lot of server security.
But this is hostile terrain; it leads you into DevOps territory, which is well outside game development and more like an architect's job. Imagine that the 3-tier system (client + server + database) has a bottleneck on the server.
You want to spawn more nodes to handle more clients. This is what EVERY lobby-based game does (LoL, Overwatch, WoT, WoW instances, and so on), and what you do for partitioned maps (e.g. the "maps" in LOTRO, RIFT, and many more MMORPGs), as well as for mirroring (multiple instances of the same map to accommodate an overpopulated crowd).
For this kind of horizontal scaling, your servers must go online/offline on their own, without you clicking around in a command-and-control tool (e.g. Puppet and similar software).
While this is the ultimate approach, it also has the steepest learning curve, especially because of security (DDoS, flooding, slowloris, fake clients; the list goes on). A new node must be "provisioned" on the fly (e.g. cloud-config) before it attaches to the cluster and goes online, so there's a whole new world of pain and learning.
The center-piece of such an elastic cloud-based server system is SSO (single sign-on).

Porting OpenGL Programs from C to WebGL and Javascript

I'm trying to port a very complex 3D modeling program (written in C) to WebGL. The program has its own physics engine written from scratch, and I would like to use the transform data output by the physics engine as matrices to transform objects rendered in a web page.
The program is so massive that I would like to keep the physics engine in C as-is, but move the graphics part into the browser.
My crazy idea is to have the physics engine running constantly on the server, then stream the transformation matrices to the client and apply them to pre-rendered WebGL objects.
Is this possible to do?
Clarification: The program is a viewer, so all of the physics backend is isolated from user input. The user will, however, be able to manipulate camera angles on the client side.
Update: I've decided on the following solution; let me know if any of this is wrong. I will host the C program as a daemon using node.js and pump data over websockets to the front end, which is pixi.js (for 2D elements) and babylon.js or three.js (for 3D elements). The data will consist of JSON objects (quaternions or sine/cosine matrices) that will be handled on the front end in JavaScript and applied once per second (FPS doesn't matter in my situation, so that's okay).
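A sketch of what applying the streamed data could look like on the front end, assuming the JSON carries unit quaternions as {x, y, z, w}: the function below converts one into the column-major 4x4 layout that WebGL (and three.js/babylon.js) consume.

```javascript
// Sketch: turn a streamed unit quaternion {x, y, z, w} into a
// column-major 4x4 matrix, the layout raw WebGL expects.
function quatToMatrix({ x, y, z, w }) {
  const xx = x * x, yy = y * y, zz = z * z;
  const xy = x * y, xz = x * z, yz = y * z;
  const wx = w * x, wy = w * y, wz = w * z;
  return [
    1 - 2 * (yy + zz), 2 * (xy + wz),     2 * (xz - wy),     0, // column 0
    2 * (xy - wz),     1 - 2 * (xx + zz), 2 * (yz + wx),     0, // column 1
    2 * (xz + wy),     2 * (yz - wx),     1 - 2 * (xx + yy), 0, // column 2
    0, 0, 0, 1,                                                 // column 3
  ];
}
```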
Push and pop matrix are auxiliary (not a core part of the rendering pipeline), so you can replicate them with a custom stack.
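For example, a push/pop matrix replacement can be a plain array of 4x4 column-major matrices; this minimal sketch adds a translate operation to show how transforms compose against the stack top.

```javascript
// Sketch: replicating OpenGL's pushMatrix/popMatrix with a plain
// array stack of 4x4 column-major matrices (as WebGL expects).
const IDENTITY = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];

const stack = [IDENTITY.slice()];

function current() { return stack[stack.length - 1]; }
function pushMatrix() { stack.push(current().slice()); }
function popMatrix() { if (stack.length > 1) stack.pop(); }

// Example transform: post-multiply a translation onto the stack top,
// updating only the translation column (indices 12-14).
function translate(x, y, z) {
  const m = current();
  m[12] += m[0] * x + m[4] * y + m[8] * z;
  m[13] += m[1] * x + m[5] * y + m[9] * z;
  m[14] += m[2] * x + m[6] * y + m[10] * z;
}
```

popMatrix discards the pushed copy, restoring whatever transform was current before the push, which is exactly the glPushMatrix/glPopMatrix contract.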
About the whole crazy idea:
With interactive physics, latency can be a problem, and you will need some sort of position extrapolation on the client side. The worst case is multiple clients: one client causes a physics event, sends it to the server, and the server relays it to another client. You effectively double the latency, and in the end it is really hard to resolve the inconsistency between three states (or more, with more clients): one unique state at each client and the actual state on the server. The bigger the latency, the greater the inconsistency, and physics is really sensitive to this kind of inconsistency because it tends to snowball. Your clients are likely to see weird popping in and out of existence, teleportation, and falling through solid objects.
I implemented this successfully by doing the following:
Within my C code I used the hiredis library to publish a JSON-formatted string to Redis, hosted on an EC2 machine behind nginx. Next I used primus (a node module) to set up a subscriber websocket from Redis to a node.js app server. Node then serves the data (whenever there's a publish) to the client, where I parse the JSON and use the object to draw my scene.
This solution is really quick and easy, and it is a very effective means of pushing large amounts of data (in the form of a JSON object) to many clients. It also allows setting up multiple channels within C (for different datasets), at which point you can pick and choose which channels to listen to on the client side (or listen to all of them at once!). I'm away from the computer with the code, but I can post more detailed instructions/code examples later.

What is the best way I can scale my nodejs app?

The basics
Right now a few of my friends and I are trying to develop a browser game in node.js. It's a multiplayer top-down shooter, and most of the client-side and server-side code is JavaScript. We have a good general direction we'd like to go in, and we're having a lot of fun developing the game. One of our goals was to make the game as hard as possible to cheat, so all of the game logic is handled server-side. The client only sends its input to the server via web socket, and the server updates the client (also via web socket) with what is happening in the game. Here's the start of our problem.
All of the server-side math is getting pretty hefty, and we're finding that we need to scale in some way to handle anything more than 10 players (we want to host many more). At first we figured we could just scale vertically as needed, but since node.js is single-threaded, it can only take advantage of one core. This means that getting a beefier server won't help. Our only solution is to scale horizontally.
Why we're asking here
We haven't been able to find any good examples of how to scale out a nodejs game. Our use case is pretty particular, and while we've done our best to do this by ourselves, we could really benefit from outside opinions and advice
Details
We've already put a LOT of thought into how to solve this problem. We've been working on it for over a week. Here's what we have put together so far:
Four types of servers
We're splitting tasks into 4 different 'types' of servers. Each one will have a specific task it completes.
The proxy server
The proxy server would sit at the front of the entire stack and be the only server directly accessible from the internet (there could potentially be more than one). It would run HAProxy and route all connections to the web servers. We chose HAProxy for its rich feature set, reliability, and nearly unbeatable speed.
The web server
The web servers would receive the web requests and serve all web pages. They would also handle lobby creation/management and game creation/management. To do this, they would tell the game servers what lobbies they have, what users are in each lobby, and info about the game they're going to play. The web servers would then update the game servers about user input, and the game servers would update the web servers (who would then update the clients) about what's happening in the game. The web servers would use TCP sockets to communicate with the game servers about any kind of management, and UDP sockets when communicating about game updates. This would all be done with node.js.
The game server
The game servers would handle all the game math and state updates. They would also communicate with the db servers to record cool stats about players in game. This would be done with node.js.
The db server
The db server would host the database. This part actually turned out to be the easiest since we found rethinkdb, the coolest db ever. This scales easily, and oddly enough, turned out to be the easiest part of scaling our application.
Some other details
If you're having trouble getting your head around our whole setup, look at this; it's a semi-accurate chart of how we think we'll scale.
If you're just curious, or think it might be helpful to look at our game, it's currently hosted in its unscaled state here.
Some things we don't want
We don't want to use node's cluster module. It isn't stable (said here), and it doesn't scale to other servers, only to other processors. We'd like to take the leap straight to horizontal scaling.
Our question, summed up
We hope we're going in the right direction, and we've done our homework, but we're not certain. We could certainly take a few tips on how to do this the right way.
Thanks
I realize that this is a pretty long question, and making a well thought out answer will not be easy, but I would really appreciate it.
Thanks!!
Here are my spontaneous thoughts on your case:
Multicore usage
node.js can scale across multiple cores as well. You can read how, for example, here. Or just think about it: you have one thread/process running on one core, so to use multiple cores you need multiple threads or processes. Push work from the main thread to other threads or processes and you are done.
I personally would say it is childish to develop an application that does not make use of multiple cores. If you make use of some background processes, fine; but if until now you only do work in the node.js main event loop, you should definitely invest some time in making the app scale across cores.
Implementing something like IPC is not that easy, by the way. You can do it, but if your case is complicated you may be fine with the cluster module. It is obviously not your favorite, but just because something is called "experimental" does not mean it's trashy. Give it a try; maybe you can even fix some bugs in the module along the way. It's usually better to use broadly adopted software for complex problems than to reinvent the wheel.
You should also (if you do not already) think about wise usage of the nextTick functionality. It allows the main event loop to pause a CPU-intensive task and perform other work in the meantime. You can read about it, for example, here.
General thoughts on computations
You should definitely take a very close look at the algorithms of your game engine. You already noticed that this is your bottleneck right now, and computations are the most critical part of almost every game. Scaling solves this problem in one way, but it introduces other problems, and you cannot throw "scaling" at every problem and expect it to disappear.
Your best bet is to make your game code elegant and fast. Think about how to solve problems efficiently. If you cannot solve something efficiently in JavaScript, but the problem can easily be extracted, why not write a little C component instead? That counts as a separate process as well, which reduces the load on your main node.js event loop.
Proxy?
Personally, I do not see the advantage of the proxy level right now. You do not seem to expect a large number of users, so you won't need to solve the problems a CDN solves. It's okay to think about it, but I would not invest much time there right now.
Technically, there is a high chance your web-server software provides proxy functionality anyway. So it is fine to have it on paper, but I would not plan dedicated hardware for it right now.
Epilogue
The rest seems more or less fine to me.
A little late to the game, but take a look here: http://goldfirestudios.com/blog/136/Horizontally-Scaling-Node.js-and-WebSockets-with-Redis
You did not mention anything about memory management. As you know, node.js doesn't share memory with other processes, so an in-memory database is a must if you want to scale (Redis, Memcached, etc.). You need to set up publisher & subscriber events on each node to accept incoming requests from Redis. This way, you can scale up to any number of servers (in front of your HAProxy) and utilize the data piped through Redis.
There is also this node addon: http://blog.varunajayasiri.com/shared-memory-with-nodejs It lets you share memory between processes, but it only works under Linux. It helps if you don't want to send data between local processes all the time or deal with node's IPC API.
You can also fork child processes within node to get a new V8 isolate for expensive CPU-bound tasks. For example, players can kill monsters and obtain quite a bit of loot in my action RPG. I have a child process called LootGenerater; whenever a player kills a monster, the game sends the game id, mob_id, and user_id to the process via the default IPC API (.send). Once the child process receives it, it iterates over the large loot table, manages the items (stores them to Redis, or whatever), and pipes the result back.
This frees up the event loop greatly, and it's just one idea to help you scale. Most importantly, use an in-memory database system and make sure your game code architecture is designed around whatever database system you use. Don't make the mistake I did of having to re-write everything :)
Hope this helps!
Note: if you do decide to go with Memcached, you will need to use another pub/sub system.

Server side javascript with WebGL?

I'm thinking about learning WebGL and the first thing that comes to mind is that JavaScript is client-side; what approach (if any) is used to have server-side JavaScript specifically related to WebGL?
I am new to WebGL as well, but I still feel this is a very advanced question, because of the range of possible answers and the current problems around proprietary WebGL.
If you have done any research into WebGL, you will immediately see the need for server-side code: the WebGL API code is executed within the browser and is thus freely available to any knowing individual. This is not a typical circumstance for game developers, who are used to shipping their code compiled.
By making use of server-side controls, a developer can hide a large amount of WebGL transformations, shaders, and matrices and still maintain a level of information hiding in the client-side code. However, the game will never work without an active internet connection.
Since WebGL is still relatively new, and IE does not support it, expect things to change. M$ may decide to build their own web API like WebGL that ends up being an ASP.NET library, and all of the complexity that currently goes into solving the problem you are facing gets condensed into a 3-button wizard.
With that being said, I think the answer to your question lies in the fate of some future technologies. For bigger goals there will more than likely be a large amount of back-and-forth communication, and protocols like HTTP may not cut it; WebSockets or similar technologies may be worth looking into. If you are attempting to use canvas for something smaller, an understanding of building dynamic JavaScript may be enough.
The problem with these answers is that OpenGL is an API with a specific order of operations that is not meant to be changed, which makes this approach to building WebGL applications very limited: changing GL objects may require a whole canvas restart, a page refresh, or a new page request, and that could produce undesirable effects. For now I would say aim low, but one thing's for sure: WebGL is going to change the web as we web developers know it.
I'm not sure what you are looking for, probably not this... :)
but...
If you want a server-side fallback for browsers not supporting WebGL, let's say for generating fixed frames as PNG images of some 3D scene, then you could write your 3D viewer in C or C++, build it for OpenGL ES when targeting the server-side fallback, and use Emscripten to target browsers supporting WebGL.

Client or server-side HTML5 canvas rendering for a node.js whiteboard application?

I was thinking a little whiteboard web app would be a nice way to improve my node.js and JavaScript skills. I've seen a few on the web, which makes sense as it seems ideal for this kind of stack.
Just taking a moment to think, however, I was wondering about the roles of both client and server in this kind of web application. Stumbling upon node-canvas, I became even more confused. What, specifically, should the client and server be responsible for?
If the server is capable of rendering to a canvas, should it accept and validate input from the clients and then broadcast it to all other connected users via socket.io? That way, the server keeps a master canvas of sorts; once a new user connects, the server just has to push its canvas out to that client, bringing it up to speed with whatever has been drawn.
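A sketch of that server role, with a plain stroke log standing in for the master canvas: validated strokes get appended (and would be broadcast), and a new client receives the whole log as its catch-up snapshot. The stroke fields are assumptions.

```javascript
// Sketch: server-side "master" state as an ordered stroke log.
// With socket.io this would sit behind io.on('connection', ...).
const strokes = [];

// Validate and record a stroke; returns true if it was accepted.
// Accepted strokes would then be broadcast to the other clients.
function applyStroke(stroke) {
  const coords = [stroke.x0, stroke.y0, stroke.x1, stroke.y1];
  if (!coords.every(Number.isFinite)) return false; // reject malformed input
  strokes.push(stroke);
  return true;
}

// Everything a newly connected client needs to catch up.
function snapshot() {
  return strokes.slice();
}
```

Replaying a stroke log scales better than shipping pixel data, and node-canvas can still rasterize the same log into an image when a flat snapshot is preferable.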
Any guidance on implementation - specific or philosophical - is appreciated.
Thanks!
I wrote http://draw.2x.io, which uses node-canvas (previously node-cairo, which I wrote myself) along with socket.io.
The way I've designed my application, the client essentially does all the stroke generation from user input. The strokes are in turn processed by a canvas abstraction, which supports a subset of operations and parameters I've defined myself. If this layer accepts the input the painting modules produce, the strokes are also shipped, via socket.io, to the server.
On the server I've got the same kind of canvas layer wrapping node-canvas. It replicates the user's input in memory there, eventually making it possible to send a state image to new clients. After that, the strokes are (pending parameter/context validation by the server application) published to the other connected clients, which repeat the same procedure as above.
A company I work for implemented a whiteboard app with node.js (but did not use node-canvas) and socket.io. Unfortunately, I cannot give you code or even a website since it has not been released.
Your implementation seems very similar. Clients connect to our server and update the server whenever the whiteboard is drawn to (JSON data w/(x,y) coordinates) through socket.io. The server then updates the rest of the clients and keeps a copy of all the (x,y) coordinates so that new clients who join can see what has already been drawn.
Good luck with your app. I've been programming with node.js a lot lately and boy do I love it.
Here's a multiuser whiteboard tutorial written in JavaScript/HTML5, with all source available:
http://www.unionplatform.com/?page_id=2762
It's not node on the server side, but the client-side code should still be useful if you want to adapt it to a node backend.
