I'm trying to port a very complex 3D modeling program (written in C) to WebGL. The program has its own physics engine written from scratch, and I would like to use the transform data output by the physics engine as matrices to transform objects rendered in a web page.
The program is so massive that I would like to keep the physics engine in C as is, but take the graphics part into the browser.
My crazy idea is to have the physics engine running constantly on the server, and then stream the transformation matrices to the client and apply the transformations to pre-rendered WebGL objects.
Is this possible to do?
Clarification: The program is a viewer, so all of the physics backend is isolated from user input. The user will, however, be able to manipulate camera angles on the client side.
Update: I've decided on implementing the following solution; let me know if any of this is wrong: I will host the C program as a daemon using node.js, and pump data using websockets to the front end, which is pixi.js (for 2D elements) and babylon.js (or three.js) (for 3D elements). The data will be comprised of JSON objects (quaternions or sine/cosine matrices) that will be handled on the front end in JavaScript and applied once per second (fps doesn't matter in my situation, so it's okay).
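For reference, applying one of those streamed JSON quaternions on the client boils down to a quaternion-to-matrix conversion. A minimal sketch, assuming each message carries a unit quaternion as `{x, y, z, w}` (that frame shape is my assumption, not part of the plan above):

```javascript
// Convert a unit quaternion {x, y, z, w} to a column-major 4x4 matrix,
// the array layout both three.js (Matrix4.fromArray) and Babylon.js
// (Matrix.FromArray) accept.
function quatToMatrix(q) {
  const { x, y, z, w } = q;
  const x2 = x + x, y2 = y + y, z2 = z + z;
  const xx = x * x2, xy = x * y2, xz = x * z2;
  const yy = y * y2, yz = y * z2, zz = z * z2;
  const wx = w * x2, wy = w * y2, wz = w * z2;
  return [
    1 - (yy + zz), xy + wz,       xz - wy,       0, // column 0
    xy - wz,       1 - (xx + zz), yz + wx,       0, // column 1
    xz + wy,       yz - wx,       1 - (xx + yy), 0, // column 2
    0,             0,             0,             1, // column 3
  ];
}
```

At a 1 Hz update rate this conversion is trivially cheap; the cost lives entirely in the network hop.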
Push and pop matrix are auxiliary (not a core part of the rendering pipeline) so you can replicate them with a custom stack.
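WebGL dropped the fixed-function matrix stack entirely, so a replacement is a few lines. A sketch, using 16-element column-major arrays (the format `gl.uniformMatrix4fv` expects):

```javascript
const IDENTITY = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];

// A custom stack replicating glPushMatrix/glPopMatrix semantics.
class MatrixStack {
  constructor() { this.stack = [IDENTITY.slice()]; }
  get current() { return this.stack[this.stack.length - 1]; }
  push() { this.stack.push(this.current.slice()); }      // save a copy
  pop() { if (this.stack.length > 1) this.stack.pop(); } // restore it
  // Post-multiply the current matrix by m (current = current * m),
  // matching the old glMultMatrix behavior.
  multiply(m) {
    const a = this.current, out = new Array(16);
    for (let col = 0; col < 4; col++) {
      for (let row = 0; row < 4; row++) {
        let sum = 0;
        for (let k = 0; k < 4; k++) sum += a[k * 4 + row] * m[col * 4 + k];
        out[col * 4 + row] = sum;
      }
    }
    this.stack[this.stack.length - 1] = out;
  }
  translate(x, y, z) {
    this.multiply([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, x, y, z, 1]);
  }
}
```

Usage mirrors the old GL calls: `push()` before drawing a child object, transform, upload `current` as your model matrix uniform, then `pop()`.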
About the whole crazy idea:
In the case of interactive physics, latency can be a problem and you will need some sort of position extrapolation on the client side. The worst case is when you have multiple clients. Imagine that one client causes a physics event: it sends data to the server, and the server relays it to another client. You effectively get double the latency, and in the end it is really hard to resolve the inconsistency between three states (or even more, with more clients): one state at each client and one actual state at the server. The bigger the latency, the higher the inconsistency. Physics is really sensitive to this kind of inconsistency because errors tend to snowball, so your clients are likely to see weird popping in and out of existence, teleportation, and falling through solid objects.
I implemented this successfully by doing the following:
Within my C code I used the hiredis library to publish a JSON-formatted string to Redis, hosted on an EC2 machine with nginx. Next I used Primus (a node module) to set up a subscriber websocket from Redis to a node.js app server. Node then serves the data (whenever there's a publish) to the client, where I parse the JSON and use the object to draw my scene.
This solution is really quick and easy, and is a very effective means of pushing large amounts of data (in the form of a JSON object) through to many clients. It also allows for setting up multiple channels within C (for different datasets), at which point you can pick and choose which channels you want to listen to on the client side (or listen to all of them at once!). I'm away from my computer with the code, but I can post more detailed instructions/code examples on how to do this at a later time.
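In the meantime, a hedged sketch of the node-side glue: validating each published frame before relaying it, so a malformed publish from the C side can't break every connected client. The channel name and frame shape here are my assumptions for illustration:

```javascript
// Validate a JSON frame published from C (via hiredis) before relaying.
// The shape { id, matrix: [16 numbers] } is assumed for illustration.
function parseTransformFrame(message) {
  const frame = JSON.parse(message);
  if (typeof frame.id !== 'string') {
    throw new Error('frame is missing an object id');
  }
  if (!Array.isArray(frame.matrix) || frame.matrix.length !== 16 ||
      !frame.matrix.every(Number.isFinite)) {
    throw new Error('expected matrix to be a 16-element numeric array');
  }
  return frame;
}

// Wiring sketch (not exercised here): a Redis subscriber feeding Primus.
//   const sub = require('redis').createClient();
//   sub.subscribe('transforms');
//   sub.on('message', (channel, msg) =>
//     primus.write(parseTransformFrame(msg)));
```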
Related
I am currently coding my offline game into an online game by use of node.js and socket.io
In my game, I use vectors from a library called p5.js to store position of player, collision related movement, etc.
However, the server side (a file called "server.js") does not have p5.js like the client does, so I can't send information about the players using vectors.
Here is my question: How could I make the server.js file have access to my p5.js library?
Note: Simply sending x and y values, and then using them to make a vector, would be a difficult solution, as I would no longer be able to send a single array holding all the information of all players. Also, enemies, food, trail positions, and much more also depend on vectors. Converting all of these would be difficult to code.
What you want to do is simple, but it won't feel simple until you fully understand all the parts involved, and for that I'm afraid it's going to take you at least a few months given your current level.
How could I make the server.js file have access to my p5.js library?
Is your p5.js a browser-only library, or you can import it as a module in your server? If it's the second option, all you have to do for your server to access it is:
const p5 = require('p5');
Keep in mind that:
Server should handle the position, movement, and actions of players so they can't cheat.
The client should be just a visual display of the information coming from the server, plus sending the player key inputs to the server.
So unless you want to make client side prediction and entity interpolation, which I doubt because you are starting out, keep it simple. No need to share libraries between client and server yet.
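That said, if sharing p5 with the server turns out to be awkward, plain objects sidestep the problem entirely: p5.Vector instances don't survive JSON serialization anyway, so you can pack positions into bare x/y pairs on the wire and rebuild vectors on arrival. A sketch, with the player shape assumed for illustration:

```javascript
// Pack players into JSON-safe plain objects for socket.io.
// The { id, pos } player shape is an assumption for illustration.
function packPlayers(players) {
  return players.map(p => ({ id: p.id, x: p.pos.x, y: p.pos.y }));
}

// Rebuild vectors on arrival: pass p5's createVector on the client,
// or any compatible factory on the server.
function unpackPlayers(data, makeVector) {
  return data.map(p => ({ id: p.id, pos: makeVector(p.x, p.y) }));
}
```

On the client, `unpackPlayers(data, createVector)` restores real p5.Vector instances; the server can get by with a trivial `(x, y) => ({ x, y })` factory, since it only needs the numbers.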
So, suppose there was a game which consisted of a website, and a client that you can launch from said website. I've looked a bit and a relatable example would be Habbo Hotel.
What I'm asking is, what are all the different parts that would make such a game work: for the website part, I'd imagine a server, a database, and some HTML, CSS and PHP coding would be required, but how would the client side operate?
More specifically, how would the client-to-server (and vice versa) real-time communications happen?
Supposing the client were coded in C, how would the integration of C into a (I suppose PHP-framed) browser window be executed?
Note that the client is never downloaded on the user's PC, so where would it reside?
I'm sorry if these are a lot of questions, if the answers were to be too tedious to compose, feel free to just leave some documentation or tutorials (which I've looked for but haven't really been able to find), I'll happily read them on my own. Thanks in advance.
On one side your question is too broad but on the other side I can give you some pointers of how to do this in a modern way:
don't have a client, just a page in a browser
use HTML5 canvas, you may also want to look into SPA (single page application)
connect via websocket, there are HTML5 javascript implementations and PHP or node.js for the server-side
best is to use node.js on the server; PHP would be way too cumbersome
via websocket, send and receive JSON objects
host node.js on its native platform (Linux)
you may want to look into Phaser as an HTML5 client-side canvas rendering framework, but it lacks much functionality and is mainly oriented towards twitch-based action games, which don't work well with this architecture because of lag
this will lead you to one conclusion: javascript is at the center of this system. you'll encounter several roadblocks, such as:
security on websockets with SSL for login
avoid SSL for real-time data (above 1 Hz)
UI on the client inside the canvas is not easy to implement, you'll have to re-invent the wheel or find a UI library for that
expect lag; the network code adds some 20%-30% overhead compared to native C/C# code using TCP/IP or UDP with protobuf. Lidgren + protobuf is what high-frequency AAA titles (MMORPGs and FPS alike) use
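The "send and receive JSON objects" point above can be sketched as a small typed envelope with a dispatch table; the message types here ('move', 'chat') are placeholders, not part of any real protocol:

```javascript
// A typed JSON envelope for the websocket layer.
function encode(type, payload) {
  return JSON.stringify({ type, payload });
}

// Build a message router from a map of type -> handler.
function makeDispatcher(handlers) {
  return raw => {
    const msg = JSON.parse(raw);
    const handler = handlers[msg.type];
    if (!handler) throw new Error(`unknown message type: ${msg.type}`);
    return handler(msg.payload);
  };
}
```

The same pair runs unchanged in the browser (`ws.onmessage = e => dispatch(e.data)`) and on the node side, which is exactly the "javascript at the center" property mentioned below.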
From the questions you asked I sense a great lack of understanding and knowledge about the field. I guess you'll have to study for some 6-12+ months beforehand. This is what I recommend, because if you start right away you'll make a lot of errors and waste your time. If any of the names above are unfamiliar, search for them and study them well. And don't start to code yet; there is a very steep learning curve ahead of you!
EDIT: more about server-side
If you want to implement an action-based interactive game (like an FPS or 2D shooter) I have to tell you this.
You may want to look into Unity 3D, using TCP/IP connections directly and binary messages (no HTTP, no websocket; protobuf instead).
C# (client-side) and node.js (server-side) are a good combination. For horizontal scaling you may want to look into cloud computing, docker, provisioning and a lot of server security.
But this is hostile terrain, it leads you into DevOps territory, which is way off game development. More like an architect's job. Imagine that 3-tier system (client + server + database) has a bottleneck on the server.
You want to spawn more nodes to handle more clients. This is what EVERY lobby-based game does (LoL, Overwatch, WoT, WoW instances, and so on) and what you do for partitioned maps (e.g. the "maps" in LOTRO, RIFT, and many more MMORPGs). It is also how you mirror (run multiple instances of the same map to accommodate an overpopulated crowd).
To have this kind of horizontal scaling, your servers must go online/offline on their own, without you clicking around in a command-and-control tool (e.g. Puppet and similar software).
While this is the ultimate approach, it also has the steepest learning curve, especially because of security (averting DDoS, flooding, slowloris, fake clients; the list goes on). A new node must be "provisioned" on the fly (e.g. cloud-config) before it attaches to the cluster and goes online, so there's a whole new world of pain and learning.
The center-piece of such an elastic cloud-based server system is SSO (single sign-on).
I'm learning React.
I'd like to create a game with a basic tile-board (like here http://richard.to/projects/beersweeper/ but where tiles can have two states ('available' or 'already clicked').
In terms of speed, React looks interesting as with its virtual DOM/diffs, I could only adjust the css and text inside tiles that have been clicked (so that they visually differ form those still not clicked by anyone).
My goal (and personal challenge, haha) is to make this game playable by 1000 simultaneous players who can click where they want on a 100,000-tile board. (Distribution of tile status among clients in real time would be done with Firebase.)
Should I use basic standard React and its built-in features (onclick events, listeners...), or is it impossible with React alone to handle that many events/listeners for 1000 people on 100k tiles in real time, with any user able to click anywhere (on available tiles)?
Should I use alternative/complementary tools and techniques such as canvas, React Art, GPU acceleration, WebGL, texture atlases...?
WebGL is the right answer. It's also quite complicated to work with.
But depending on the size of the tiles, React could work; you just can't render 100k DOM nodes performantly, no matter how you do it. Instead, render only the subset of tiles visible to the user.
To pull something like this off, you'll need to have a lot of optimized code, and firebase most likely won't be up to par. I'd recommend a binary protocol over websockets and a database that makes sense (fast lookups on multiple numeric index ranges, and subscriptions).
Ultimately, I'd probably go with:
webgl (compare three.js and pixi.js)
custom data server in golang (with persistence/fallback managed by a mysql engine like maria or aws's aurora)
websocket server written in golang
websockets (no wrapper library, binary protocol)
The only reason for golang over node.js for the websocket server is CPU performance, which means lower latency and more clients per server. They perform about the same for the network aspect.
You'll probably ignore most of this, but just understand that if you do have performance problems, switching some of the parts out for these will help.
Do a prototype that handles 2 concurrent users and 1000 tiles, and go from there. In order of priority:
don't render 100k dom nodes!
webgl instead of dom
binary websocket protocol, over socket.io or similar (not firebase)
custom data server in go
binary websocket protocol not using socket.io (e.g. ws package in node)
websocket server in go (not really that important, maybe never)
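For the binary websocket protocol, a single tile update can fit in 9 bytes instead of the ~40+ its JSON equivalent would cost; the field layout below is an assumption for illustration:

```javascript
// Pack one tile update into 9 bytes: uint32 x, uint32 y, uint8 state.
// DataView defaults to big-endian, so both sides agree without flags.
function encodeTileUpdate(x, y, state) {
  const view = new DataView(new ArrayBuffer(9));
  view.setUint32(0, x);
  view.setUint32(4, y);
  view.setUint8(8, state);
  return view.buffer; // send with ws.send(buffer, { binary: true })
}

function decodeTileUpdate(buffer) {
  const view = new DataView(buffer);
  return {
    x: view.getUint32(0),
    y: view.getUint32(4),
    state: view.getUint8(8),
  };
}
```

On the browser side, set `ws.binaryType = 'arraybuffer'` so incoming frames arrive as ArrayBuffers rather than Blobs; the same encode/decode pair works in node and the browser.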
Lots of people use React as the V in MVC.
I believe that React is fine for the UI, but you should ask yourself what the server-side logic will be; you still have to think about the M and C.
If you are looking at a load of 1000 simultaneous users, the keyword scalable will be your friend.
Also you should check out Node.js for the server-side service: Express.js for its fast implementation, and finally Pomelo.js, which is a JS game-server implementation based on node.js.
On the subject of performance: WebGL will most likely boost your performance. Here you can grab a nice tutorial on the topic: https://github.com/alexmackey/IntroToWebGLWithThreeJS
If you want to build it without GL languages whatsoever, you should dig deeper into JavaScript and create your own pseudo-class library with dynamic data bindings. Otherwise you might end up using a small percentage of a powerful framework that will only slow down your API.
I would refrain from using canvas, as it is good for model visualization rather than as a game front-end. Check out d3.js for its awesomeness and, unfortunately, its performance issues.
Here I have written a nice fiddle that creates a 100x100 matrix with hovers, and performance is not so good. You can easily tweak it to get a 100k-element matrix: https://jsfiddle.net/urahara/zL0fxyn3/2/
EDIT: WebGL is the only reasonable solution.
I was thinking a little whiteboard web app would be a nice way to improve my node.js and JavaScript skills. I've seen a few on the web, which makes sense as it seems ideal for this kind of stack.
Just taking a moment to think, however, I was wondering about the roles of both client and server in this kind of web application. Stumbling upon node-canvas, I became even more confused. What, specifically, should the client and server be responsible for?
If the server is capable of rendering to a canvas, should it accept and validate input from the clients and then broadcast it to all other connected users via socket.io? This way, the server keeps a master-canvas element of sorts. Once a new user connects, the server just has to push out its canvas that client - bringing it up to pace with whatever has been drawn.
Any guidance on implementation - specific or philosophical - is appreciated.
Thanks!
I wrote http://draw.2x.io, which uses node-canvas (previously node-cairo, which I wrote myself) along with socket.io.
The way I've designed my application, the client essentially does all the stroke generation from user input. These are in turn processed by a canvas abstraction, which supports a subset of operations and parameters which I've defined myself. If this layer accepts whatever input the painting modules produce, they are also shipped, via socket.io, to the server.
On the server I've got the same kind of canvas layer wrapping node-canvas. This will thus replicate the input from the user in memory there, eventually making it possible to send a state image to new clients. Following this, the strokes will -- pending parameter / context validation by the server application -- be published to other connected clients, which will repeat the same procedure as above.
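That parameter/context validation step is essentially a whitelist over the canvas subset. A sketch of the idea; the operation names and bounds here are illustrative assumptions, not draw.2x.io's actual protocol:

```javascript
// Server-side whitelist validation of stroke operations before they are
// replayed on node-canvas and broadcast to other clients.
const ALLOWED_OPS = new Set(['moveTo', 'lineTo', 'quadraticCurveTo']);

function validateStroke(stroke, width, height) {
  return Array.isArray(stroke.ops) && stroke.ops.every(op =>
    ALLOWED_OPS.has(op.name) &&
    Array.isArray(op.args) &&
    op.args.every(Number.isFinite) &&
    // args alternate x, y; both must land on the board
    op.args.every((v, i) => i % 2 === 0 ? v >= 0 && v <= width
                                        : v >= 0 && v <= height));
}
```

Rejecting a stroke before it touches the server-side canvas means the master state (and every client replaying the broadcast) only ever sees operations the abstraction layer supports.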
A company I work for implemented a whiteboard app with node.js (but did not use node-canvas) and socket.io. Unfortunately, I cannot give you code or even a website since it has not been released.
Your implementation seems very similar. Clients connect to our server and update the server whenever the whiteboard is drawn to (JSON data w/(x,y) coordinates) through socket.io. The server then updates the rest of the clients and keeps a copy of all the (x,y) coordinates so that new clients who join can see what has already been drawn.
Good luck with your app. I've been programming with node.js a lot lately and boy do I love it.
Here's a multi-user whiteboard tutorial written in JavaScript/HTML5, with all source available:
http://www.unionplatform.com/?page_id=2762
it's not node on the server-side, but the client-side code should still be useful if you want to adapt it to a node backend.
I'm working on a project to learn node.js, and was looking for some suggestions on how to handle syncing user data in real-time.
Say you have a 2D rectangular map (roughly 600x400), with a number of players occupying x,y positions on that map. Each user can navigate around with the arrow keys and interact with the others in some basic way. Given that this would be played over HTTP, what would be the best design pattern in terms of handling and syncing user data to give the smoothest, snappiest experience?
I can think of several options, but would appreciate some more ideas / clarification:
Client sends positional data to server, server distributes all positions to all clients, screen is rendered with the result. Repeat. The downside would be that the client side is lagged by the time it takes for a data round-trip, but the upside is that they are synced with all users.
Client renders where it thinks it is constantly, sends positional data to server, server distributes all positions to all clients, and then screen rendering from client data is corrected with server data. Upside is a snappier response, downside is a slight loss of sync.
A blend of the two, but instead of using (x,y) co-ordinates we use a vector of [previous x/y and time, current x/y and time, suggested x/y at time interval] which can then be used to draw projectile paths that constantly shift. It seems like this would be tricky to implement.
Any pointers?
Most games use some form of dead reckoning http://en.wikipedia.org/wiki/Dead_reckoning which allows for delayed updates distributed from the server but preserves some illusion of real time updates on the client.
The best option is 3. It's not particularly complex - just track where you expect each actor to be based on the mechanics of the game, and when you receive an update from the server you bring the two states in line over time.
If you find the server sends you a state that is too far from the state your client is assuming (too far needs to be defined) then you may just jump to the server state and accept the discontinuity on the client.
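A sketch of that scheme; the blend factor and snap threshold are tunable assumptions, not canonical values:

```javascript
// Dead reckoning: extrapolate locally from the last known velocity...
function extrapolate(state, dt) {
  return { ...state, x: state.x + state.vx * dt, y: state.y + state.vy * dt };
}

// ...then ease toward the authoritative server state when one arrives.
// Blend a fraction (alpha) of the error each frame; if the error exceeds
// snapDist, accept the discontinuity and jump straight to the server state.
function reconcile(local, server, alpha = 0.1, snapDist = 50) {
  const dx = server.x - local.x;
  const dy = server.y - local.y;
  if (Math.hypot(dx, dy) > snapDist) return { ...server };
  return { ...local, x: local.x + alpha * dx, y: local.y + alpha * dy };
}
```

Each client frame runs `extrapolate` on every actor; each server update runs `reconcile` against the predicted state, so small errors are absorbed smoothly and only gross divergence produces a visible jump.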