I'm doing some research for a JavaScript project where the performance of drawing simple primitives (i.e. lines) is by far the top priority.
The answers to this question provide a great list of JS graphics libraries. While I realize that the choice of browser has a greater impact than the library, I'd like to know whether there are any differences between them, before choosing one.
Has anyone done a performance comparison between any of these?
Updated answer (2019):
The core advice is still the same: for maximal performance, use thin wrappers or the raw browser APIs, and avoid the DOM or any DOM-like structure. In 2019 this means avoiding SVG (and any library built on top of it), because rapidly changing the DOM causes performance problems.
Canvas is the go-to solution for high-performance web graphics, in both the 2D and 3D (WebGL) contexts. Flash is dead, so it is no longer an option, but even if it weren't, its performance was eventually matched by the native browser APIs.
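For reference, a minimal sketch of the fast path with the raw 2D context (element id and coordinates are invented for illustration):

```javascript
// A minimal sketch of line drawing with the raw Canvas 2D API.
const canvas = document.getElementById('scene');
const ctx = canvas.getContext('2d');

// Batching many segments into a single path is one of the cheapest
// optimizations available: one stroke() call instead of one per segment.
const segments = [[10, 10, 290, 140], [10, 140, 290, 10]];
ctx.beginPath();
for (const [x1, y1, x2, y2] of segments) {
  ctx.moveTo(x1, y1);
  ctx.lineTo(x2, y2);
}
ctx.strokeStyle = '#333';
ctx.stroke();
```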
Original answer (2009):
If you're not doing 3D, just use raw canvas, with excanvas as an Internet Explorer fallback. Your bottleneck will be JavaScript execution speed, not line rendering speed. Except in IE, which will bog down when the scene gets too complex, because VML actually builds a DOM.
If you're really worried about performance, though, definitely go with Flash and write the whole thing in ActionScript. You'll get an order of magnitude better performance, and with the Flex SDK you don't even need to buy anything. There are several decent libraries for 3D in Flash/Flex available.
Raphael JavaScript Library
http://raphaeljs.com
None of them have good performance. It is 2009, and the state of programmable graphics rendering in web browsers is truly depressing. I could get faster interactivity on a VT125 terminal 25 years ago. If you are doing anything interactive, think about using Flash. Otherwise I'd go for some heavy server-side solution, depending on your needs.
Up to now I have used processing.js (a JavaScript canvas implementation of the "Processing" language) and/or raphael.js (a tiny and handy VML/SVG JavaScript library).
The performance recommendation depends on the task (a sketch of both approaches follows below):
highly dynamic, complex primitives (or a huge number of them): canvas (pixels, low-level API)
a smaller number of primitives, highly scalable, integrated in the DOM: SVG/VML (vectors, high-level API)
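A hedged sketch of the difference between the two (element ids and coordinates are invented):

```javascript
// Canvas: immediate mode. The line is rasterized to pixels and
// forgotten; redrawing is entirely your job.
var ctx = document.getElementById('c').getContext('2d');
ctx.beginPath();
ctx.moveTo(0, 0);
ctx.lineTo(100, 50);
ctx.stroke();

// SVG: retained mode. Each primitive is a DOM node that the browser
// keeps, re-renders, scales, and hit-tests for you.
var SVG_NS = 'http://www.w3.org/2000/svg';
var line = document.createElementNS(SVG_NS, 'line');
line.setAttribute('x1', 0);
line.setAttribute('y1', 0);
line.setAttribute('x2', 100);
line.setAttribute('y2', 50);
line.setAttribute('stroke', 'black');
document.getElementById('svgRoot').appendChild(line);
```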
I know you said the browser has more influence, so why not stick with optimized SVG calls? Then it "could" work in all browsers, but...
Chrome is robust enough to render 3D modeling efficiently:
http://www.chromeexperiments.com/detail/monster/
How about http://www.jsgl.org? It uses SVG/VML.
For basic drawing (such as lines, circles, and polygons), I would recommend Walter Zorn's Graphics Library. It was built with performance in mind and works in a ton of browsers.
Related
I am trying to understand why it is such a hard task for browsers to fully render the DOM many times per second, the way game engines do for their canvas. Game engines can perform many, many calculations each frame, computing light, shadows, physics, etc., and still keep a seamless frame rate.
Why can't browsers do the same, allowing full re-rendering of the DOM many times per second, seamlessly?
I understand that rendering a DOM and rendering a game scene are two completely different tasks, but I don't understand why the latter is so much harder in terms of performance.
Please try to focus on specific aspects of rendering a DOM, and explain why game engines don't face the same problems. For example: "browsers need to parse the HTML, while all the code of the game is pre-compiled and ready to run".
EDIT: I edited my question because it was marked as opinionated. I am not asking for opinions here, only facts. I am asking why browsers can't fully re-render the DOM 60 frames per second the way game engines render their canvas. I understand that browsers face a more difficult task, but I don't understand why exactly. Please stick with informative answers only, and avoid opinions.
Games are programs written to do operations specific to themselves. They are written in low-level languages (asm/C/C++), or at least languages that have access to machine-level operations. When it comes to graphics, games are able to push programs onto the graphics card for rendering: drawing vectors and colouring/rasterization.
https://en.wikipedia.org/wiki/OpenGL
https://en.wikipedia.org/wiki/Rasterisation
They also have optimised memory, CPU usage, and IO.
Browsers, on the other hand, are applications that have many requirements.
They are primarily designed to render HTML documents, via the creation of objects which represent the HTML elements. Browsers have a more complex job, as they support multiple versions of the DOM and document types (DTDs), and the security associated with each DTD.
https://en.wikipedia.org/wiki/Document_type_declaration
They also have to support rendering a very generic set of documents; one page is not the same as another. They have to carry libraries for IO, CSS parsing, image parsing (JPEG, PNG, BMP, etc.), movie players and their associated codecs, audio players and their codecs, and webcam support. Additionally, they support the JavaScript code environment (not just the language, but IO and event handling), and have historic support for COM and Java applets.
This makes them very versatile tools, but heavyweight: they carry a lot of baggage.
The graphics side can never be quite as performant as a dedicated program in this respect, as the API browsers provide for such operations always runs at a higher level.
Even the Canvas API (as the name suggests) is a layer of abstraction above the lower-level rendering libraries, and each layer of abstraction adds a performance hit.
https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API
For better graphics performance there is now a new standard available in browsers, called WebGL. This is still an API, though, and it runs in a sandbox, so it will still not be as performant as dedicated code.
https://en.wikipedia.org/wiki/WebGL
Even games using game engines (Unity, Unreal) will be accessing graphical features, CPU, memory, and IO in a much more dedicated fashion than browsers do, as the game engines themselves provide dedicated rendering and rasterization functions that developers can use in their games for optimised graphics. Browsers can't, as they have to cover many generic cases rather than specific requirements.
https://docs.unrealengine.com/en-US/Engine/index.html
https://learn.unity.com/tutorial/procedural-sky-19-1
First of all, games on the Web don't use the DOM much. They use the faster Canvas API. The DOM is made for changing content on a document (that's what the D in DOM stands for), so it is a really bad fit for games.
How is it possible that my crappy phone can run Call Of Duty seamlessly, but it's so hard to write a big webpage that will run smoothly on it?
I never had performance problems with the DOM. Of course, if you update the whole <body> with a single .innerHTML assignment 60 times a second, I wouldn't be surprised if the performance is bad, because the browser needs to:
Parse the HTML and construct the DOM tree;
Apply styles and calculate the position of each element;
Render the elements.
Each of those steps is a lot of work for the CPU, and the process is mostly single-threaded in most browsers.
You can improve the performance by:
Never using .innerHTML. .innerHTML makes the browser transform HTML into a DOM tree and vice versa. Use document.createElement() and .appendChild() instead (see the sketch below).
Avoiding DOM changes. Change only the CSS styles, if possible.
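A sketch of that first point (here `list` is an existing <ul> element and `label` a string, both invented for illustration):

```javascript
// Slow: every assignment makes the browser re-parse the HTML string
// and rebuild the whole subtree.
list.innerHTML += '<li>' + label + '</li>';

// Faster: build the node directly and attach it. No re-parsing, and
// the existing siblings are left untouched.
var item = document.createElement('li');
item.textContent = label;
list.appendChild(item);
```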
Generally, it depends on the game. The most powerful games are developed in C++ or C engines, so they work directly with memory and use the full power of the processor.
Web pages based on the DOM, by contrast, are written in an interpreted language like JavaScript. The problem can also come from the server, if the web page is deployed incorrectly or hosted on a slow server.
I'm thinking about learning WebGL and the first thing that comes to mind is that JavaScript is client-side; what approach (if any) is used to have server-side JavaScript specifically related to WebGL?
I am new to WebGL as well, but I still feel that this is a very advanced question you are asking. I believe it is an advanced question because of the scope of answers that exist to do what you are asking and the current problems related to proprietary WebGL.
If you have done any research into WebGL, you will immediately see the need for server-side code, due to the fact that the WebGL API code is executed within the browser and is thus freely available to any knowing individual. This is not a typical circumstance for game developers, who are used to shipping their code compiled.
By making use of server-side controls, a developer can hide a large amount of the WebGL translations, shaders, and matrices, and still maintain a level of information hiding in the client-side code. However, the game will never work without an active internet connection.
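A hedged sketch of what that can look like: the shader source lives only on the server and is fetched and compiled at runtime. The URL is invented, and this is one possible shape of the idea, not any particular site's actual implementation:

```javascript
// Fetch fragment shader source from the server, compile it, and hand
// the result to a callback. Uses only standard XHR and WebGL calls.
function loadFragmentShader(gl, url, onReady) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.onload = function () {
    var shader = gl.createShader(gl.FRAGMENT_SHADER);
    gl.shaderSource(shader, xhr.responseText);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      throw new Error(gl.getShaderInfoLog(shader));
    }
    onReady(shader);
  };
  xhr.send();
}
```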
Since WebGL is still relatively new, and IE does not support it, expect things to change. M$ may decide to build their own web API like WebGL that ends up being an ASP.NET library, and all of the required complexity that currently goes into building a solution to the problem you are facing would get condensed into a three-button wizard.
With that being said, I think the answer to your question lies in the fate of some future technologies. For bigger goals there will more than likely be a large amount of back-and-forth communication; protocols like HTTP may not cut it. WebSockets or other similar technologies may be worth looking into. If you are attempting to use Canvas for something smaller, just an understanding of building dynamic JavaScript may be enough.
The problem with these answers is that OpenGL is an API itself and has a specific order of operations that is not meant to be changed. This means that this approach to building WebGL applications is very limited, since changing GL objects may require a whole canvas restart, a page refresh, or a new page request. This could result in undesirable effects. For now I would say aim low, but one thing's for sure: WebGL is going to change the web as we web developers know it.
I'm not sure what you are looking for, probably not this... :)
but...
If you want a server-side fallback for browsers not supporting WebGL, let's say for generating fixed frames as PNG images of some 3D scene, then you could write your 3D viewer in C or C++, build it for OpenGL ES when targeting your server-side fallback, and use Emscripten to target browsers supporting WebGL.
I came across this proof of concept earlier today (on TechCrunch.com) and was blown away, intrigued as to how they had managed to accomplish the end result. They state that they don't use WebGL or any plugins, yet they are able to interact directly with the GPU and render 3D visuals at up to 60 fps using nothing but JavaScript. Any ideas how this could be done, or how to access the GPU from JavaScript in general without the use of plugins?
Site Address is: famo.us
P.S.: Try using your arrow keys to shift orientation; it's far out!
They're using standard HTML5 JavaScript APIs to achieve this.
I saw several references to requestAnimationFrame in their code. In a compatible browser, this method lets you display more fluid and optimized animations, with far better frame rates than setInterval-based animations will ever allow. This is certainly achieved by using the GPU and the available hardware in the background.
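A minimal requestAnimationFrame loop, for reference (update() and render() stand in for your own functions):

```javascript
// The browser schedules each frame, typically in sync with the display
// refresh (commonly 60 fps), instead of the fixed timer setInterval
// gives you.
function frame(timestamp) {
  update(timestamp); // advance your animation state (your function)
  render();          // draw the current state (your function)
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```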
Neither the GPU nor any other hardware component can be accessed directly using JavaScript routines. Instead, the browser, based on the called JS directives and the execution context, will use the GPU where possible to optimize specific treatments, calculations, and renderings.
EDIT
For future reference, I found out (7 years after the original answer) that a new initiative, the W3C GPU for the Web Community Group, was created in 2020 to do just that. It describes itself as follows:
The mission of the GPU on the Web Community Group is to provide an interface between the Web Platform and modern 3D graphics and computation capabilities present in native system platforms. The goal is to design a new Web API that exposes these modern technologies in a performant, powerful and safe manner. It should work with existing platform APIs such as Direct3D 12 from Microsoft, Metal from Apple, and Vulkan from the Khronos Group. This API will also expose the generic computational facilities available in today's GPUs to the Web, and investigate shader languages to produce a cross-platform solution.
In the long term, this initiative might allow developers to interact with the GPU directly from all web browsers. You can track the implementation status of the WebGPU API spec on GitHub here.
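A hedged sketch of feature-detecting the new API, written against the draft WebGPU spec (the API surface may still change before it ships everywhere):

```javascript
// Probe for WebGPU: navigator.gpu is the draft spec's entry point.
async function probeWebGPU() {
  if (!navigator.gpu) return null;                 // no WebGPU here
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) return null;                       // no suitable GPU
  return adapter.requestDevice();                  // resolves to a GPUDevice
}
```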
Concerning famo.us: they analysed the bottlenecks of the WebKit rendering pipeline and then found a way to bypass them while building the pages. Basically, the DOM tree construction, the render tree construction, and the layout of the render tree are bypassed. Take a look at this article for a full explanation.
They're using CSS 3D transforms. Web browsers are increasingly using hardware acceleration to do these kinds of CSS3 things.
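A sketch of the technique (class name and values are invented): a 3D transform typically promotes the element to its own compositor layer, so moving it costs a GPU compositing pass rather than a full relayout and repaint.

```javascript
// Animate an element with a 3D transform, which browsers commonly
// hand off to the GPU compositor instead of recalculating layout.
var card = document.querySelector('.card');
card.style.transition = 'transform 0.3s ease-out';
card.style.transform = 'translate3d(120px, 0, 0) rotateY(25deg)';
```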
I think the WebGL glsl.js library might be good for this, though I haven't seen benchmarks...
https://github.com/gre/glsl.js/
This approach also seems viable:
Basically, to use the GPU the way we'd like to, hardware-optimised functions (have a look into "BLAS") are used; you do not want to write these yourself! Strangely, it seems that people still use the old Fortran BLAS... There is some work on compiling via LLVM and then using Emscripten to turn it into JavaScript.
Use Emscripten with Fortran: LAPACK binding
The Emscripten way seems the most versatile; I'm just about to check it out, but it looks like a mountain. This topic also seems to be somewhat of a call to arms: Emscripten only works with Fortran if you hack it (see the links from the second link). I think what we are looking for is BLAS support in JavaScript; this is not a closed issue by any means, and for a few of us out here it is very frustrating! Hopefully someone has a link to the BLAS libraries we can't find...
Please let me know if I don't quite have my head around this issue; JS is new to me.
Also, regarding the suggestion that HTML5 is sufficiently "optimised" to use the hardware: how can we trust this? We need GPU-optimised BLAS (basic linear algebra subprograms), e.g. the dot product.
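For concreteness, here is the plain-JavaScript baseline of the dot product in question (a naive sketch; in BLAS terms this is ddot). Any GPU-backed implementation would have to beat this to be worth the trouble:

```javascript
// Naive CPU dot product; typed arrays at least keep the data flat in
// memory, which helps the JIT.
function dot(a, b) {
  var sum = 0;
  for (var i = 0; i < a.length; i++) sum += a[i] * b[i];
  return sum;
}
var x = new Float64Array([1, 2, 3]);
var y = new Float64Array([4, 5, 6]);
console.log(dot(x, y)); // 32
```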
I also think that maybe these old Fortran BLAS routines aren't actually the right thing for a modern GPU. node-cuda seems very likely the best thing I have found...
When I search for the performance of JavaScript libraries, I get many sites showing comparisons of performance between the following popular libraries:
jQuery (pretty slow)
Prototype (very slow in IE)
Dojo (fastest when it comes to the DOM)
ExtJS (average)
MicroJS (slow but OK)
But in none of these benchmarks is the Google Closure Library included. Is it unlike the other standard libraries, given that it is said to be a procedural-style library?
I need some benchmarks on the performance of the Closure Library, and I would like advice on whether switching to the Closure Library is a good idea when using Dojo at the beginner stage and jQuery at an intermediate stage.
Google states that it uses the Closure Library in all its apps, like Gmail, etc., and the performance is very good. Is this because of the library? Can an intermediate JavaScript coder who can write OO code in JS use the Closure Library at a very high level, or is it advisable to continue using Dojo?
On the Closure Library
The Closure Library is pretty close to Dojo in style -- actually, when it was first developed, the authors took inspiration from Dojo.
However, the speed and power of the Closure Library come from the Closure Compiler, which heavily optimizes a JavaScript program in order to remove bottlenecks (such as navigating long chains of namespaces).
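An illustrative sketch of the kind of rewriting the compiler's advanced mode does (all names here are invented; the exact output will differ):

```javascript
// Before compilation: every call walks a chain of namespace objects.
goog.provide('myapp.util.geometry');
myapp.util.geometry.distance = function (a, b) {
  var dx = a.x - b.x, dy = a.y - b.y;
  return Math.sqrt(dx * dx + dy * dy);
};
var dist = myapp.util.geometry.distance(p, q); // three property lookups

// After advanced optimizations (conceptually): the namespace chain is
// flattened to a single renamed symbol, and unused code is dropped.
var a = function (b, c) {
  var d = b.x - c.x, e = b.y - c.y;
  return Math.sqrt(d * d + e * e);
}, f = a(p, q);
```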
I personally don't like it a single bit as it detracts from the beauty of Dojo class-based constructs (simply to satisfy the compiler) and all those goog.kitchen.sink.getMeACupOfTeaSoICanRelax() long namespaces make writing (and reading) JavaScript programs a royal pain -- the fact that long namespaces are all optimized away by the compiler does not make it pretty (to me) to overuse them because you can.
In addition, its obsession with trying to make JavaScript programs look as OOP as possible (perhaps because there are tons of Java programmers at Google) means over-reliance on OOP concepts like property getters and setters, and the avoidance of many useful (and unique) JavaScript features like mixins. If you are a Java programmer learning to program in JavaScript, you'll be right at home using the Closure Library. That doesn't make it one bit elegant.
It does, however, offer an industrial strength environment that is rock solid -- since Google has built HUGE sites with it. It is something that (in my personal opinion) is solid and works well, but looks ugly.
However, Dojo is rock solid as well, but more volatile since it is an open-source development project. You decide whether you'd like to switch.
On the Closure Compiler and Dojo
Actually, you can use Dojo with the Closure Compiler in Advanced Mode also. See this link for a description on how to do it. Based on my own tests, a program compiled by the Closure Compiler is typically around 25% smaller than minified versions (due to dead-code elimination) and runs about 20-30% faster for simple pages and more for large pages.
On Speed of Libraries in General
The other libraries all have their own characteristics and quirks, and each balances usability, flexibility, and power against performance. For example, jQuery creates many, many jQuery objects along the way and pays a performance penalty, especially on older browsers. However, modern browsers, especially Google Chrome, optimize well enough that the performance hit of using jQuery is minimal.
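A common way to limit that penalty is to cache the jQuery wrapper instead of re-creating it (`items` is an invented array of strings, for illustration):

```javascript
// Slow: re-runs the selector and builds a fresh jQuery object on
// every iteration.
for (var i = 0; i < items.length; i++) {
  $('#list').append('<li>' + items[i] + '</li>');
}

// Faster: cache the wrapper once and append in a single batch.
var $list = $('#list');
$list.append('<li>' + items.join('</li><li>') + '</li>');
```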
You actually need to ask yourself why you need JavaScript to run fast. Most modern browsers are already quite fast, so speed is really not a very important consideration when choosing a library. Better to choose your library based on whether it suits you (and the task at hand) instead of whether it runs 10 ms faster in a browser.
If you are writing a web site for mobile devices, or writing an HTML5 game for instance, you may need to squeeze the last drop of performance (in games) and/or save as much resources as possible (in mobile). In such cases, I find that using Dojo and then compiling with the Closure Compiler yields one of the best combinations for such scenarios.
It would be nice to back this up with some data. There are lies, statistics, and benchmarks. You may, for example, ask yourself why framework x is slower than framework y. In general, though, benchmarks do not reflect real-world scenarios, and tend to be fun but useless. I've worked with jQuery for a few years now, and I've not encountered situations where it's slow, or where that had any effect on usability.
If your code is too slow, you need to optimize it. Then you may benchmark, profile, and debug the sweet jesus out of that code. Things like Gmail are likely to have had that kind of treatment. So Gmail is probably not fast because of Closure, but fast because of intelligent programming.
If you're afraid of slow code, you might want to learn generic JavaScript optimization methods, and you'll be a happy camper with any framework.
Edit: on Closure
Choosing Closure might have several advantages when you're building something big and complex: there are some nice bandwidth-optimization tools and tools for managing more complex applications. There is a good guideline on how to write proper code, and tools to check that you followed these guidelines (the Closure Linter, quite interesting). There's a nice templating system (which probably won't improve your performance per se) and dependency-management machinery. If you wish to integrate these tools, or build upon them, I would say go for it; otherwise, you may rest easy and be happy with your choice of Dojo.
using dojo at beginner stage and jQuery at some intermediate stage
You've got that the wrong way around. Dojo is far more advanced than jQuery. jQuery itself is for beginners.
Can an intermediate javaScript coder who can write OO code in JS use Closure library to very high level, or is it advisable to continue using DOJO.
Either library will work. Spend a few hours playing with Google Closure and see how you find it. I doubt there's much difference, apart from generally easier integration with Google libraries like Google Charts.
I'm going to develop comprehensive educational software which runs in the browser and involves a lot of visualization and simulation work (electrostatic and electromagnetic visualization, 2D and 3D).
Which language (Processing, JavaScript, or something else) is best suited to my purpose?
The question is indeed broad but I will answer from the experience I've had.
JavaScript is not really meant for mathematical calculations, which is what might be necessary to compute a lot of E&M phenomena quickly (especially those not represented by a closed-form solution). It also comes down to how much detail you want in your graphs (more steps = more calculations). You may find yourself needing to do more optimization to make up for the performance difference.
I did some visualizations of antenna arrays (they had closed-form solutions, only simple arrays) in Flash, and it worked out OK. JavaScript will definitely not be up to par for any 3D simulations you might want to do.
I wonder if Silverlight might be a better solution, because you may find more mathematics libraries for .NET than for ActionScript. That could save you a lot of the work of writing the math out yourself (though you might end up doing this anyway because of the performance issues).
As others have suggested, JavaScript is not that strong a language when it comes to visualization.
Processing is a really good language for what you're trying to do. It's easy to learn and is Java-based. Data visualization is built directly into the language, as well as temporal space (i.e., advance "1 tick" in time and have the visualization react to that).
Also if you're interested in going that route I'd suggest picking up Visualizing Data which is pretty much a Processing primer.
Flash may be the more common application stack right now for what you are looking for, but Silverlight looks primed to take the title from it, based on the powerful features it contains.
I would go Flex or Silverlight myself
Plenty of re-usable libraries
Native support for multimedia
Native support for graphics and animation
I'm a little late to the show, but what you want has been implemented in JavaScript, and you'll find this incredibly useful. I recommend running it under Chrome, as its JS engine is extremely fast. (You may even want to try Chrome 2, which is even faster.)
http://ejohn.org/blog/processingjs/
http://ejohn.org/apps/processing.js/examples/basic/ (91 basic demos.)
http://ejohn.org/apps/processing.js/examples/topics/ (51 larger, topical, demos.)
http://ejohn.org/apps/processing.js/examples/custom/ (4 custom "in the wild" demos.)
See also: http://www.chromeexperiments.com/
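For reference, a minimal embedding sketch, assuming the Processing(canvas, sketch) constructor that processing.js exposes (the canvas id is invented):

```javascript
// Bind a sketch function to a canvas element; `p` is the Processing
// instance, with setup() run once and draw() run every frame.
var canvas = document.getElementById('stage');
new Processing(canvas, function (p) {
  p.setup = function () {
    p.size(300, 200);
  };
  p.draw = function () {
    p.background(255);
    p.ellipse(p.mouseX, p.mouseY, 20, 20); // circle follows the mouse
  };
});
```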
I second LFSR Consulting's opinion: Processing is used a lot for educational purposes, it is free, fast (Java is faster than Flash in general), and easy to learn, so you get results faster. It supports 3D, you can tap Java libraries for simulation and computing, etc. And it has a great community! :-)
JavaScript is a bit light for such usage. JavaFX is hyped, but it doesn't really have 3D (although some have used Java3D with it) and it is still a bit young.
Flash and Silverlight: no comment, not much experience in the field. OpenLaszlo could be an alternative...
You really have two choices: ActionScript in Flash, or VB.NET/C#/another language in Silverlight.
So first you need to decide which of these platforms you will target.
You may be able to split the problem into two parts, the user-interaction and display part, and the heavy calculations part.
If you can move the heavy calculations to a server then you can still show everything in javascript.
One difficulty with JavaScript is that it is interpreted, and you will need to write more of the equations yourself, so there is both a performance hit and extra development time. On the other hand, it will work without any plugins, unless you want 3D beyond what the canvas tag offers.
Flash and Silverlight may have better options, but then you are learning new languages and requiring plugins, depending on what version of Flash you want to use.
Check out processing.js, Xcode, and iProcessing!
Processing.js is great for data visualization but is lacking in interactivity.
You should probably try Python. It is a really good language for educational and computational purposes, it has a pretty decent community, and the syntax is not so tough. Even though it was designed for the command line, you can create front-end GUIs for it using an external package, and it provides packages like SciPy, NumPy, and Matplotlib for advanced plotting and data visualization.