I'm thinking about learning WebGL and the first thing that comes to mind is that JavaScript is client-side; what approach (if any) is used to have server-side JavaScript specifically related to WebGL?
I am new to WebGL as well, but I still feel this is a fairly advanced question, both because of the range of possible answers and because of the current problems around keeping WebGL code proprietary.
If you have done any research into WebGL you will immediately see the need for server-side code, because the WebGL API code is executed within the browser and is therefore freely available to anyone who cares to look. This is not a typical situation for game developers, who are used to shipping their code compiled.
By making use of server-side controls a developer can keep a large amount of the WebGL transformations, shaders, and matrices off the client and still maintain a level of information hiding in the client-side code. However, the game will then never work without an active internet connection.
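As a rough illustration of that idea (the endpoint, file names, and helper are all hypothetical), shader source can be kept out of the static page and fetched from the server only at runtime:

```js
// Hypothetical sketch: keep GLSL source on the server and fetch it at runtime,
// so it never ships as part of the static page bundle.
async function loadShader(gl, type, url) {
  const source = await fetch(url).then(res => res.text()); // e.g. '/shaders/scene.frag'
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader));
  }
  return shader;
}

// Usage (assumes an existing WebGL context `gl` and matching server routes):
// const frag = await loadShader(gl, gl.FRAGMENT_SHADER, '/shaders/scene.frag');
```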
Since WebGL is still relatively new, and IE does not support it, expect things to change. Microsoft may decide to build their own web API like WebGL that ends up being an ASP.NET library, so all of the complexity that currently goes into solving the problem you are facing gets condensed into a three-button wizard.
With that being said, I think the answer to your question lies in the fate of some future technologies. For bigger goals there will more than likely be a large amount of back-and-forth communication, and protocols like plain HTTP may not cut it; WebSockets or similar technologies may be worth looking into. If you are attempting to use Canvas for something smaller, an understanding of building dynamic JavaScript may be enough.
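If WebSockets turn out to be the right fit, a minimal client-side sketch looks like this (the URL and message shape are made up for illustration):

```js
// Minimal WebSocket client sketch; the endpoint and message format are hypothetical.
const socket = new WebSocket('wss://example.com/game');

socket.addEventListener('open', () => {
  // Ask the server for only the data the client is allowed to see.
  socket.send(JSON.stringify({ type: 'requestScene', level: 1 }));
});

socket.addEventListener('message', (event) => {
  const update = JSON.parse(event.data);
  // Feed the server-provided data (matrices, entity positions, ...) into the WebGL layer here.
  console.log('received update', update);
});
```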
The problem with these answers is that OpenGL is an API in its own right, with a specific order of operations that is not meant to be changed. This means this approach to building WebGL applications is quite limited: changing GL objects may require a whole canvas restart, a page refresh, or a new page request, which can produce undesirable effects. For now I would say aim low, but one thing is for sure: WebGL is going to change the web as we web developers know it.
I'm not sure what you are looking for, probably not this... :)
but...
If you want a server-side fallback for browsers not supporting WebGL, let's say for generating fixed frames as PNG images of some 3D scene, then you could write your 3D viewer in C or C++, build it against OpenGL ES for the server-side fallback, and use Emscripten to target browsers supporting WebGL.
Related
I am trying to understand why it is such a hard task for browsers to fully render the DOM many times per second, the way game engines do for their canvas. Game engines can perform a great many calculations each frame, computing light, shadows, physics, etc., and still keep a seamless frame rate.
Why can't browsers do the same, allowing full re-rendering of the DOM many times per second seamlessly?
I understand that rendering the DOM and rendering a game scene are two completely different tasks, but I don't understand why the DOM is so much harder to render in terms of performance.
Please try to focus on specific aspects of rendering the DOM, and explain why game engines don't face the same problems. For example: "browsers need to parse the HTML, while all the code of the game is pre-compiled and ready to run".
EDIT: I edited my question because it was marked as opinionated. I am not asking for opinions here, only facts. I am asking why browsers can't fully re-render the DOM 60 frames per second the way game engines render their canvas. I understand that browsers face a more difficult task, but I don't understand why exactly. Please stick with informative answers only, and avoid opinions.
Games are programs written to do operations specific to themselves; they are written in low-level languages (asm/C/C++), or at least languages that have access to machine-level operations. When it comes to graphics, games are able to push programs into the graphics card for rendering: drawing vectors and colouring/rasterization.
https://en.wikipedia.org/wiki/OpenGL
https://en.wikipedia.org/wiki/Rasterisation
They also have optimised memory, CPU usage, and IO.
Browsers, on the other hand, are applications that have many requirements.
They are primarily designed to render HTML documents, via the creation of objects which represent the HTML elements. Browsers have a more complex job, as they support multiple versions of the DOM and document types (DTDs), and the associated security required by each DTD.
https://en.wikipedia.org/wiki/Document_type_declaration
They also have to support rendering a very generic set of documents; one page is not the same as another. They need libraries for IO, CSS parsing, image parsing (JPEG, PNG, BMP, etc.), movie players and their associated codecs, audio players and their codecs, and webcam support. Additionally they support the JavaScript code environment (not just the language, but IO and event handling) and carry historic support for COM and Java applets.
This makes them very versatile tools, but heavyweight ones; they carry a lot of baggage.
The graphic aspects can never be quite as performant as a dedicated program in this aspect, as the API they provide for such operations is always running at a higher level.
Even the Canvas API (as the name suggests) is a layer of abstraction above the lower-level rendering libraries, and each layer of abstraction adds a performance hit.
https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API
For better graphics performance there is now a newer standard available in browsers called WebGL, though this is still an API and runs in a sandbox, so it will still not be as performant as dedicated code.
https://en.wikipedia.org/wiki/WebGL
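For reference, getting (or failing to get) a WebGL context is a one-liner, which also makes feature detection straightforward; a minimal sketch:

```js
// Minimal WebGL feature-detection sketch.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');

if (gl) {
  console.log('WebGL is available:', gl.getParameter(gl.VERSION));
} else {
  console.log('WebGL is not supported; fall back to Canvas 2D or static content.');
}
```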
Even games using game engines such as Unity or Unreal will be accessing graphical features, CPU, memory, and IO in a much more dedicated fashion than browsers would, as the game engines themselves provide dedicated rendering and rasterization functions that developers can use for optimised graphical features in their games. Browsers can't do this, because they have to cover many generic cases rather than specific requirements.
https://docs.unrealengine.com/en-US/Engine/index.html
https://learn.unity.com/tutorial/procedural-sky-19-1
First of all, games on the Web don't use the DOM much. They use the faster Canvas API. The DOM is made for changing content on a document (that's what the D in DOM stands for), so it is a really bad fit for games.
How is it possible that my crappy phone can run Call Of Duty seamlessly, but it's so hard to write a big webpage that will run smoothly on it?
I never had performance problems with the DOM. Of course, if you update the whole <body> with a single .innerHTML assignment 60 times a second, I wouldn't be surprised if the performance is bad, because the browser needs to:
Parse the HTML and construct the DOM tree;
Apply styles and calculate the position of each element;
Render the elements.
Each of those steps is a lot of work for the CPU, and the process is mostly single-threaded in most browsers.
You can improve the performance by:
Never using .innerHTML. .innerHTML makes the browser transform HTML into a DOM tree and vice-versa. Use document.createElement() and .appendChild() instead (see the sketch below).
Avoid changing the DOM. Change only the CSS styles, if possible.
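A rough sketch of the difference (the list data here is made up purely for illustration):

```js
// Hypothetical example: appending 1000 items to a list.

// Slow: every assignment re-parses the string and rebuilds the subtree.
function slow(container, items) {
  for (const item of items) {
    container.innerHTML += `<li>${item}</li>`;
  }
}

// Faster: build nodes directly and append them in one batch.
function fast(container, items) {
  const fragment = document.createDocumentFragment();
  for (const item of items) {
    const li = document.createElement('li');
    li.textContent = item;
    fragment.appendChild(li);
  }
  container.appendChild(fragment);
}
```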
Generally, it depends on the game. The most powerful games are developed with C or C++ engines, so they work directly with memory and use the full power of the processor.
Web pages based on the DOM, by contrast, are written in an interpreted language like JavaScript. The problem can also come from the server side, if the web page is deployed incorrectly or hosted on a slow server.
I'm learning React.
I'd like to create a game with a basic tile board (like the one here: http://richard.to/projects/beersweeper/), but where tiles can have two states ('available' or 'already clicked').
In terms of speed, React looks interesting: with its virtual DOM/diffs, I could adjust only the CSS and text inside tiles that have been clicked (so that they visually differ from those not yet clicked by anyone).
My goal (and personal challenge, haha) is to make this game playable by 1,000 simultaneous players who can click wherever they want on a 100,000-tile board (tile status would be distributed to clients in real time with Firebase).
Should I use standard React and its built-in features (onClick events, listeners, ...), or is it impossible with React alone to handle that many events/listeners for 1,000 people on 100K tiles in real time, with any user able to click anywhere (on available tiles)?
Should I use alternative/complementary tools and techniques such as canvas, React Art, GPU acceleration, WebGL, texture atlases, ...?
WebGL is the right answer. It's also quite complicated to work with.
But depending on the size of the tiles, React could work; you just can't render 100k DOM nodes performantly, no matter how you do it. Instead, you need to render only the subset of tiles visible to the user.
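One way to think about "render only the visible subset" (all numbers and names below are purely illustrative):

```js
// Illustrative sketch: given a scroll position and viewport size,
// compute which tiles of a 100k-tile board actually need React components.
const TILE_SIZE = 16;        // px per tile, hypothetical
const BOARD_COLUMNS = 1000;  // 1000 x 100 = 100,000 tiles, hypothetical layout

function visibleTileRange(scrollLeft, scrollTop, viewportWidth, viewportHeight) {
  const firstCol = Math.floor(scrollLeft / TILE_SIZE);
  const firstRow = Math.floor(scrollTop / TILE_SIZE);
  const lastCol = Math.ceil((scrollLeft + viewportWidth) / TILE_SIZE);
  const lastRow = Math.ceil((scrollTop + viewportHeight) / TILE_SIZE);
  return { firstCol, firstRow, lastCol, lastRow };
}

// Only tiles inside this range get mounted as components; the rest stay as plain data.
```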
To pull something like this off, you'll need a lot of optimized code, and Firebase most likely won't be up to par. I'd recommend a binary protocol over WebSockets and a database that makes sense (fast lookups on multiple numeric index ranges, and subscriptions).
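To make the "binary protocol over WebSockets" point concrete, a click could be sent as a few bytes rather than a JSON blob; the message layout here is invented for the sketch:

```js
// Hypothetical 8-byte click message: [uint32 tileX][uint32 tileY].
const socket = new WebSocket('wss://example.com/board'); // hypothetical endpoint
socket.binaryType = 'arraybuffer';

function sendClick(x, y) {
  const buffer = new ArrayBuffer(8);
  const view = new DataView(buffer);
  view.setUint32(0, x);
  view.setUint32(4, y);
  socket.send(buffer);
}

socket.onmessage = (event) => {
  const view = new DataView(event.data);
  const x = view.getUint32(0);
  const y = view.getUint32(4);
  // Mark tile (x, y) as 'already clicked' in local state here.
};
```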
Ultimately, I'd probably go with:
webgl (compare three.js and pixi.js)
custom data server in Go (with persistence/fallback managed by a MySQL-compatible engine like MariaDB or AWS Aurora)
websocket server written in golang
websockets (no wrapper library, binary protocol)
The only reason for golang over node.js for the websocket server is CPU performance, which means lower latency and more clients per server. They perform about the same for the network aspect.
You'll probably ignore most of this, but just understand that if you do have performance problems, switching some of the parts out for these will help.
Do a prototype that handles 2 concurrent users and 1000 tiles, and go from there. In order of priority:
don't render 100k dom nodes!
webgl instead of dom
binary websocket protocol, over socket.io or similar (not firebase)
custom data server in go
binary websocket protocol not using socket.io (e.g. ws package in node)
websocket server in go (not really that important, maybe never)
Lots of people use React as the V in MVC.
I believe that React is fine for the UI, but you should ask yourself what the server-side logic will be; you still have to think about the M and the C.
If you are looking for 1000 simultaneous users load, the keyword scalable will be your friend.
You should also check out Node.js for the server-side service, Express.js for its fast implementation, and finally Pomelo.js, which is a JS game server implementation based on Node.js.
On the subject of performance: WebGL will most likely boost your performance. Here you can grab a nice tutorial on the topic: https://github.com/alexmackey/IntroToWebGLWithThreeJS
If you want to build it without GL languages altogether, you should dig deeper into JavaScript and create your own pseudo-class library with dynamic data bindings. Otherwise you might end up using a small percentage of a powerful framework that will only slow down your API.
I would refrain from using canvas, as it is better suited to model visualization than to a game front end. Check out d3.js for its awesomeness and, unfortunately, its performance issues.
Here I have written a small fiddle that creates a 100x100 matrix with hovers, and the performance is not so good. You can easily tweak it to get a 100k-element matrix: https://jsfiddle.net/urahara/zL0fxyn3/2/
EDIT: WebGL is the only reasonable solution.
I came across this proof of concept earlier today (on TechCrunch.com) and was blown away and intrigued as to how they had managed to accomplish the end result. They state that they don't use WebGL or any plugins, yet they are able to interact directly with the GPU and render 3D visuals at up to 60 fps using nothing but JavaScript. Any ideas how this could be done, or how to access the GPU from JavaScript in general without the use of plugins?
Site Address is: famo.us
PS: Try using your arrow keys to shift orientation, it's far out!
They're using standard HTML5 Javascript APIs to achieve this.
I saw several references to requestAnimationFrame in their code. This method lets a compatible browser display smoother, better-optimized animations, with far better frame rates than setInterval-based animations will ever allow. This is achieved by letting the browser use the GPU and the available hardware acceleration where possible.
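The usual pattern is a simple loop like the sketch below (update and render are hypothetical placeholders); the key point is that the browser schedules the work on frame boundaries instead of a fixed timer:

```js
// Simplified animation-loop sketch using requestAnimationFrame.
let last = performance.now();

function update(deltaMs) {
  // hypothetical: advance animation/game state here
}

function render() {
  // hypothetical: draw the current state here
}

function frame(now) {
  const delta = now - last; // time since the previous frame, in ms
  last = now;
  update(delta);
  render();
  requestAnimationFrame(frame); // let the browser schedule the next frame
}

requestAnimationFrame(frame);
```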
The GPU and other hardware components cannot be accessed directly from JavaScript. Instead the browser, based on the JS directives called and the execution context, will use the GPU where possible to optimize specific operations, calculations, and rendering.
EDIT
For future reference: I recently found out (7 years after the original answer) that a new initiative, the W3C GPU for the Web Community Group, was created in 2020 to do just that. It describes itself as follows.
The mission of the GPU on the Web Community Group is to provide an interface between the Web Platform and modern 3D graphics and computation capabilities present in native system platforms. The goal is to design a new Web API that exposes these modern technologies in a performant, powerful and safe manner. It should work with existing platform APIs such as Direct3D 12 from Microsoft, Metal from Apple, and Vulkan from the Khronos Group. This API will also expose the generic computational facilities available in today's GPUs to the Web, and investigate shader languages to produce a cross-platform solution.
In the long-term, this initiative might allow developers to directly interact with the GPU from all web browsers. You can track the implementation status of the WebGPU API Spec on Github here.
Concerning famo.us: they analysed the bottlenecks of the WebKit rendering pipeline and then found a way to bypass them while building the pages. Basically, the DOM tree construction, the render tree construction, and the layout of the render tree are bypassed. Take a look at this article for the full explanation.
They're using CSS 3D transforms. Web browsers are increasingly using hardware acceleration to do these kinds of CSS3 things.
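In practice that means moving elements with transforms, which the compositor can offload to the GPU, instead of touching layout properties; a small sketch (the element id and values are hypothetical):

```js
// Sketch: position an element with a 3D transform so the browser can
// composite it on the GPU instead of triggering layout and repaint.
const el = document.getElementById('card'); // hypothetical element id

function moveTo(x, y, z) {
  el.style.transform = `translate3d(${x}px, ${y}px, ${z}px) rotateY(20deg)`;
}

moveTo(100, 50, 0);
```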
I think the WebGL glsl.js library might be good for this, though I haven't seen benchmarks...
https://github.com/gre/glsl.js/
Also, this approach seems viable:
Basically, to use the GPU in the way we would like to, hardware-optimised functions (have a look into "BLAS") are used, and you do not want to write these yourself! Strangely, it seems that people still use the old Fortran BLAS. There is also some work on compiling via LLVM and then using Emscripten to turn the result into JavaScript.
Use Emscripten with Fortran: LAPACK binding
The Emscripten way seems the most versatile; I'm just about to check it out, but it looks like a mountain. Also, this topic seems to be somewhat of a call to arms: Emscripten only works with Fortran if you hack it (see the links from the second link). I think what we are looking for is BLAS support in JavaScript. This is not a closed issue by any means, and for a few of us out here it is very frustrating! Hopefully someone has a link to these BLAS libraries we can't find...
Please let me know if I don't quite have my head around this issue; JS is new to me.
Also, regarding the suggestion that HTML5 is sufficiently "optimised" to use the hardware: how can we trust this? We need GPU-optimised BLAS (Basic Linear Algebra Subprograms), e.g. dot products.
I also suspect that these old Fortran BLAS implementations aren't actually the right thing for a modern GPU. node-cuda seems very likely the best thing I have found...
I've been developing in the .NET stack for the past five years, and with the latest release of MVC 3 and .NET 4.0 I feel like the direction I thought things were headed in has been further confirmed.
With the innovative steps the client-side community has taken in such a short period of time, it seems like the best-in-class apps have a UX driven mostly by client-side events: for example facebook.com, stackoverflow.com, Google, www.ponched.com :), etc. When I say client events I am not talking about a server-side control wrapped in an UpdatePanel to mask the postback. I am talking about doing all events and screen transitions on the client and using full postbacks only when really necessary. That's not to say things like .NET aren't essential tools to help control security, initial page load, routing, the middle tier, etc.
I understand that when working on a simple application or under aggressive time constraints, using the controls and functionality provided by default by .NET (or other web dev frameworks) may be all the project calls for. But it seems like the developers who can set themselves apart are the ones who can get into the trenches of JavaScript/jQuery and provide seamless applications with limited involvement from the (web) server. As developers we may not think our users are getting more sophisticated because of the big-name web applications they use regularly, but I am inclined to think they are.
I am curious whether anyone shares this view or has another take on it. Some post-lunch thoughts I figured I'd fire out there and see what I got back.
I share this view. We are, ironically, moving away from thin clients and back to thick clients, although this time around everything on the client is distributed on demand via the server, so the maintenance overhead is obviously nothing like it used to be.
Rich client-side functionality not only gives you fluid, responsive, interactive apps, but has the significant advantage for large scale sites and applications of being able to palm off a large chunk of processing resources to client browsers rather than having to process everything at their end. Where tens or hundreds of millions of users are involved, this amounts to a very large saving.
I could say more on the matter but time is short. I'm sure there will be other views (assuming the question isn't closed for being subjective).
The point about the developers who set themselves apart is definitely on target. Developers who understand the underlying technologies and can produce customized solutions for their clients are indeed set apart from developers who can drag and drop framework tools and wire up something that works well enough.
Focusing on web development in this discussion, it's vitally important that developers understand the key technologies in play. I can't count how many times I've encountered "web developers" (primarily in the Microsoft stack, since that's where I primarily work) who patently refuse to learn JavaScript/HTML/CSS simply because they feel that the tooling available to them in Visual Studio does the job just fine.
In many cases it does, but not in all cases. And being able to address the cases where it doesn't is what puts a developer above the rest. Something as simple as exposing a small RESTful JSON API and using AJAX calls to fetch just the necessary data instead of POSTing an entire page and re-processing the entire response means a lot to the overall user experience. Both get the job done, but one is considerably more impressive to the users than the other.
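As a sketch of that approach (assuming jQuery is loaded; the endpoint, markup, and field names are hypothetical), fetching just the data and patching the page beats re-posting the whole form:

```js
// Hypothetical endpoint and markup; the point is fetching only the data the page needs.
function refreshOrders(customerId) {
  $.getJSON('/api/customers/' + customerId + '/orders', function (orders) {
    var $list = $('#orders').empty();          // hypothetical container element
    $.each(orders, function (_, order) {
      $('<li/>').text(order.id + ': ' + order.total).appendTo($list);
    });
  });
}
```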
Frameworks are great when what you want to do is fully encapsulated in the feature set of the framework. But when you need to grow beyond the framework, it ends up being limiting. That's where a deeper understanding of the underlying technologies would allow a developer to grow outside of the framework tooling and produce a complete solution to the client.
You are right in saying that modern web development involves technologies like jQuery (or similar libraries) and JavaScript in general.
Full page reloads are old-fashioned and the Ajax approach is the way to go. Just don't think that the web server is less used or involved than before; it still responds to the Ajax calls, it simply does so asynchronously :)
ASP.NET MVC does not actually use postbacks, because there are no Web Forms and there is no equivalent page life cycle model.
I don't know if anyone thought about this but are games like World of Warcraft, Lineage II, or Aion, feasible with a browser front-end through WebGL? What are the things I would have to consider if I want to make a full fledged game with these new technologies? Also, what would be a good place to start?
This may be too open-ended, but I will take a stab.
First, as far as I know there are no modeling programs that will output what you expect, since you will need JavaScript output.
Some browsers will use the hardware to accelerate the graphics, but that isn't guaranteed, and you're only getting a share of the CPU, shared with the other tabs, so it may not be as smooth as you'd like.
If you have to download a large amount of data to run your program that will be a problem for the user.
I think the modeling program is the real challenge, though, as you will basically have to do everything by hand, and the fact that it won't be very smooth will be an issue unless you design around it.
But, for some game designs WebGL should be a fantastic choice.
I don't believe it's possible if your game must go beyond some cubes on heightmaps.
Large amounts of coding in JS, multiplied by browser quirks (yes, I'm aware of jQuery, but it's no panacea)
Large resources hanging on the thin thread of the browser cache
Ready-to-be-hacked client code exposed to browser tools like Firebug
Such a game is much more realistic in Flash, especially with the upcoming version 11 of the player with hardware 3D.
In fact it is fully possible, and we will see such games.
We can expect libraries like O3D to take care of the browser quirks. We already have these problems on desktop platforms, and libraries take care of multi-platform portability there.
The browser cache can be a slight problem, but not a big one. It is possible to assign more cache to games, and we also have proxy servers like Squid that can cache very large resources. If a group of players on a LAN share a proxy server they will also share large resource objects, provided the game is well designed (i.e. a resource cannot have multiple generated names, but must have a common URL for all players).
Also there are discussions about adding local storage possibilities for web applications.
And "ready to be hacked" is not a mayor issue. There are nothing to stop hackers from manipulating Flash or C++ applications, anti-cheating tools are already rendered useless. Blizzard is already relying on spotting "bot-like behavour" rather then try more anti-hacking measures.
However, I do not think that WoW will be among the first WebGL-based games. In fact it will be Quake (http://playwebgl.com/games/quake-2-webgl/), as there is already a Quake port for WebGL. There will be web games that make use of WebGL, but do not count on Blizzard supporting it in the near future.
IE is the only browser that does not support WebGL and, to be honest, that does not matter. All other browsers do, and users will not mind running Chrome or Firefox, or running both and choosing whichever is faster for their game.
Who cares about marginalized browsers like IE and Opera; they are equally unimportant. Unless you count IE6, which will never support any of the stuff we are discussing, as it is discontinued and unsupported.
For caching local files, you should look into the File System APIs that are now in Chrome. This gives you programmatic access to a virtual file system, letting you control what resources you store locally.
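A rough sketch of what that looked like at the time (a prefixed, Chrome-only API; file name and quota size below are hypothetical):

```js
// Sketch of Chrome's (prefixed, non-standard) File System API for caching game assets.
window.webkitRequestFileSystem(window.TEMPORARY, 5 * 1024 * 1024, function (fs) {
  fs.root.getFile('stone-texture.png', { create: true }, function (fileEntry) {
    // Write downloaded bytes into the entry here, and read them back on later visits.
    console.log('got a writable file entry at', fileEntry.fullPath);
  });
}, function (err) {
  console.log('file system not available', err);
});
```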
The Application Cache can help you with static resources like the HTML, CSS, and JavaScript required for the game. However, you need to run as an "installed web app" (via the Chrome Web Store, for example) to get unlimited storage. Browsers are building quota management systems to help make this easier.
WebGL is great, and the libraries are emerging to help make it easier. There's no clear "winner" but lots of options.
JavaScript is pretty fast these days, thanks to improvements like CrankShaft. For even better performance, you can use Native Client to run your C/C++ code and post messages back and forth to JavaScript.
There are two big issues that I can see. One is helping the middleware companies port their work to JavaScript or Native Client. The second is improving the speed with which we can move data from JavaScript into WebGL.
RuneScape, one of the most-played browser games for many years, is rewriting its engine with WebGL (it currently uses Java applets).
"If you can find a way to minimize the cost of transporting massive amounts (possibly gigs) of resources"
Actually, HTTP already has minimal cost for transporting gigabytes of static resources. With its native resource-naming scheme, the URL, it has excellent caching abilities. Not only do browsers know how to cache static resources by URL, but fast and efficient proxy servers exist that can handle terabytes of data.
The main secret to this is the HTTP HEAD request, with which the browser or proxy server can efficiently check whether it has the latest version of a resource and re-synchronize it. It is also possible, through HTTP headers, to mark a resource as eternal or very long-lived (immutable); re-synchronization is then skipped, and updates are instead done by creating a new resource with a new name.
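For example, a long-lived resource is typically served with headers along these lines, and updates then get a brand new URL; a small Node.js sketch (file name, versioning scheme, and header values are illustrative):

```js
// Illustrative Node.js sketch: serve a versioned asset as effectively immutable.
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  if (req.url === '/assets/world-geometry.v42.bin') {    // version baked into the URL
    res.writeHead(200, {
      'Content-Type': 'application/octet-stream',
      'Cache-Control': 'public, max-age=31536000',        // ~1 year: browsers and proxies keep it
      'ETag': '"v42"'                                      // lets HEAD / conditional GET confirm freshness cheaply
    });
    fs.createReadStream('world-geometry.v42.bin').pipe(res);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);
```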
There is a myth that HTTP is somehow inefficient as a resource transport system, when in fact it have been designed to be very efficient.
WoW and other clients that use proprietary protocols are notoriously inefficient compared to HTTP-based clients; those clients cannot be accelerated by proxy servers. Windows Update, apt and yum all have in common that they update OS resources over HTTP and have been able to leverage Akamai's vast global network of proxy servers, among other similar resources, to efficiently transfer URL resources on the scale of many gigabytes per client.