What are the minimum system requirements to run a WebGL application smoothly?
I know this is a broad question, so what would be the minimum requirements to run the following two experiments?
1.) http://www.ro.me/
2.) http://alteredqualia.com/three/examples/webgl_loader_ctm_materials.html
I would have done this myself, but I don't have a machine with low specs, and yes, I did try Google.
There are no minimum requirements in terms of system specs. I guess if you wanted to be really pedantic you could claim that a system needs a GPU capable of processing shaders (roughly DX9 level) but that basically describes every device with a GPU these days. (Except the Wii. Go figure...)
On a more practical level, you must have a device whose drivers have not been blacklisted due to instability issues. You can typically find the blacklist for a given browser by following the "visit support site" link on http://get.webgl.org/
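If you just want to check from script whether the current browser/driver combination will hand you a context at all (context creation is what fails on blacklisted drivers), a minimal sketch:

var canvas = document.createElement('canvas');
// Older browsers only expose the prefixed "experimental-webgl" name.
var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
if (!gl) {
  // Either no WebGL support at all, or the driver is blacklisted.
  console.log('WebGL context creation failed');
}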
Beyond that, however, it's all a matter of what the WebGL app is trying to render and where the device's bottlenecks are. For example: I've tried several WebGL demos on a Galaxy Tab with Opera and Firefox, and the framerate is limited to 10-12 FPS no matter what I'm rendering; a full, complex scene renders at the same speed as a single cube. This suggests the device is probably getting stuck in the compositing phase (where it places the WebGL canvas in the page itself), so while the hardware may be capable of some very complex rendering, the limitations of the browsers are holding it back. In comparison, I've seen highly detailed scenes running at a solid 60 FPS on iOS devices (with a hacked browser) of similar power. So the exact hardware you have is only one part of the overall equation.
But really the best answer you can possibly get at this point is to pick up a device and try it!
Related
In WebGL, long-running GLSL code can freeze the whole computer.
When browsing Shadertoy, some examples, especially in fullscreen mode, have frozen my Mac, like this one:
Path Tracer MIS (progressive)
Is there any way to detect whether a shader is taking too long and then auto-kill it from the JavaScript level? Or is this beyond a developer's reach and something that needs to be handled by browser developers?
Another question: is there a way (a plugin or external application on Mac or Linux) that prevents long-running Chrome GPU work from freezing the computer?
It's not just WebGL; all GPU code can freeze the computer. The problem is that, unlike CPUs, current GPUs are non-preemptable. That means if you give them 30 minutes of work to do, they will execute that work for 30 minutes, and there's no way to preempt them to do something else like you can with a CPU (no multitasking).
The solution on a good OS is to run a watchdog timeout: if the GPU takes too long, the OS resets the GPU (effectively rebooting it). Windows has done this since Windows Vista. macOS has only recently added something similar, and the issue for Apple is that they didn't design the OS to recover from those kinds of situations the way Microsoft did, so it's taking them longer to get it all working.
The good thing is there's not much incentive to do it on purpose. If you visit some website that locks up your computer, you're probably going to stop visiting that site, so the problem is usually self-correcting since people want you to visit their site.
On the other hand it's a problem for sites like shadertoy where people write extremely long and complicated shaders that are really only designed to run on top end GPUs and not common notebook GPUs.
As far as what you can do about it:
Ask Apple to fix their drivers and OS so they can't be crashed.
Ask Shadertoy not to automatically run heavy shaders. Maybe they could add some timing info or another heuristic to decide whether or not to warn the user that a shader might be too heavy for their machine.
Ask GPU companies to make their GPUs preemptable.
Otherwise there's not much even the browser can do. The browser has no idea if the shader is going to be fast or slow before it runs it.
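About the only thing a page can do for itself is watch its own frame times and stop submitting work when they blow past a budget. This can't preempt a draw call that is already running, but it keeps a merely heavy shader from queuing up more work. A minimal sketch, where the 500 ms budget is an arbitrary assumption and drawScene is a hypothetical function that issues the WebGL draw calls:

var FRAME_BUDGET_MS = 500; // arbitrary threshold for this sketch
var last = performance.now();
function tick(now) {
  if (now - last > FRAME_BUDGET_MS) {
    console.warn('Frame took ' + Math.round(now - last) + ' ms; stopping render loop.');
    return; // stop scheduling frames so the GPU queue can drain
  }
  last = now;
  drawScene(); // hypothetical: issues the WebGL draw calls
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);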
Recently I've been working with Three.js in order to render 3D scenes in WebGL. As a fallback, I've also maintained versions using Three's CanvasRenderer that, while they do not have as many polygons, lighting techniques, or effects, can still run in browsers that do not have WebGL capability (and can also run in Safari if the user refuses to enable WebGL themselves).
However, today I realized that compatibility is not merely a question of whether the user's browser supports WebGL, but also whether the user's hardware is powerful enough to render WebGL smoothly in the first place (i.e., run at 60 fps). While there is a lot of data out there on what percentage of the population uses which web browser, I had trouble finding distributions of users whose computers are more than capable of running WebGL.
What exactly is the percentage of the population that -- assuming they are using a WebGL-capable web browser -- can run your average WebGL page at 60 fps? And if a large portion of the population struggles with rendering in WebGL, what would be the best way to detect such hardware shortcomings? A JavaScript solution would be ideal since we are already working with Three in that language.
It is very possible that I may be misunderstanding the situation as this does not seem to be a widely discussed issue in the world of WebGL development. If such is the case please let me know so I may better understand how to work with Three in the future.
I had the same issue professionally working with Android: we had a 3D-intensive app prior to the release of a hardware-accelerated Android OS, and it ran smoothly on some phones and horribly on others. This issue crops up on any open platform like Android or Windows. iPhones/iPads have a very fixed hardware set, but on the PC you will encounter a whole range of video cards.
If you use Cg/HLSL shaders you compile against a specific hardware profile, which describes the lower bound of the vertex, geometry, and pixel shaders. However, I'm guessing you're using GLSL shaders and compiling at runtime? GLSL does not target hardware profiles and will simply fail to compile the shader if the hardware is incapable of running it.
Of course, this only gives you an idea of whether the program can execute, not how well it can execute. A shader can compile just fine and run at 5fps.
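For what it's worth, checking whether a compile succeeded is straightforward in WebGL; a minimal sketch:

function shaderCompiles(gl, type, source) {
  var shader = gl.createShader(type); // type: gl.VERTEX_SHADER or gl.FRAGMENT_SHADER
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  var ok = gl.getShaderParameter(shader, gl.COMPILE_STATUS);
  if (!ok) console.warn(gl.getShaderInfoLog(shader)); // driver's error message
  gl.deleteShader(shader);
  return ok;
}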
I was worried about users seeing a bad graphics experience so for our app on the first run I rendered a model to an offscreen target for a brief time and took an average FPS, then disabled the 3D section if the score was too low. You know what? THE USERS HATED IT. Plenty of them had 3D-capable phones and felt cheated that they couldn't see the 3D sections that were in the screenshots. I pushed an update to remove the check, so now if you have bad hardware you'll see the 3D scene running slowly. And we actually had fewer complaints with that experience, even though I hated how inelegant it seemed.
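If you do want to try that kind of first-run check despite the caveat above, the shape of it in Three.js is roughly this; the frame count and FPS threshold are arbitrary assumptions, and disable3DSection is a hypothetical fallback hook:

var frames = 0;
var start = performance.now();
function bench() {
  renderer.render(scene, camera); // assumes an existing Three.js renderer/scene/camera
  if (++frames < 60) {
    requestAnimationFrame(bench);
  } else {
    var fps = frames / ((performance.now() - start) / 1000);
    if (fps < 20) disable3DSection(); // hypothetical: fall back for low-end hardware
  }
}
requestAnimationFrame(bench);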
On the positive side, both OS X and Windows (since Vista) support hardware accelerated compositing in the OS (GL/D3D renders all your windows to the screen now). This has pushed most every PC vendor to include 3D acceleration in their computers. So I think it's a safe bet to assume people have a 3D card - whether they have WebGL will be more of a limiting factor. The variety of hardware and drivers also gives you wacky rendering bugs on one machine that appear nowhere else - PC game developers test on a range of cards prior to release in an attempt to mitigate that issue. But these wacky bugs are going to be a tiny fraction of your users at most and are often trivial, so it isn't worth worrying about unless users are complaining.
If you're still worried about performance, the standard solution on the PC is to allow the user to adjust the graphics settings to reduce the complexity of the pixel shader / number of triangles on the screen / etc. Make sure to look at real-world data about what hardware most of your customers own and use that range for targeting and testing.
Anyway, there's a real-world anecdote, HTH.
If you work with three.js you can query for supported extensions:
var gl = renderer.getContext();         // the underlying WebGLRenderingContext
var exts = gl.getSupportedExtensions(); // array of supported extension names
so if you don't get something like float textures, you get some idea of what you're working with. WebGL is built around the lowest common denominator.
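For example, a quick check for float texture support:

// OES_texture_float is the WebGL 1 extension name for floating-point textures.
var hasFloatTextures = exts.indexOf('OES_texture_float') !== -1;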
I have a JavaScript application, which works fine, but certainly needs some memory / CPU performance (it is based on top of Google maps).
It runs fine on a desktop/laptop PC, and an iPad works OK. But with all the different devices out there nowadays, smaller devices are definitely overloaded.
So basically I want to determine the client's available performance and decide whether to start my JavaScript application or display a message instead.
OK, I could check the browser, but this is a never-ending story.
I could check the OS information, but again this is very tedious. Also, some OSes run on high-performance devices as well as low-end hardware.
I could check the screen size and rule out mobile phones, but then it gets difficult.
So is there a way to determine the performance of the client? Just to make it clear, I do not care whether it is, say, a tablet, but whether it is a low-performance tablet with little CPU power/memory (so Detect phone/tablet/web client using javascript does not help).
In JavaScript you could check a list of features and, based on those, conclude whether the device is low on memory (a combined sketch follows the list).
For example:
Check core count with window.navigator.hardwareConcurrency
(For better coverage, add the Core Estimator polyfill.)
Check WebGL support (I like user2070775's suggestion): Proper way to detect WebGL support?
Check whether it's a desktop or a mobile device. Most likely a desktop will not have any problems with memory.
Check the screen resolution: most likely a small resolution means a low-memory device.
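Pulling those checks together, a rough heuristic might look like this; the thresholds are arbitrary assumptions, not established cutoffs, and showLiteVersion is a hypothetical fallback:

function looksLowEnd() {
  var cores = navigator.hardwareConcurrency || 1; // undefined on older browsers
  var canvas = document.createElement('canvas');
  var hasWebGL = !!(canvas.getContext('webgl') ||
                    canvas.getContext('experimental-webgl'));
  var smallScreen = screen.width * screen.height < 800 * 600; // arbitrary cutoff
  return cores <= 2 || !hasWebGL || smallScreen;
}
if (looksLowEnd()) showLiteVersion(); // hypothetical low-memory fallback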
I have an app in which user uploads images from computer and then draw images on the canvas.
In Chrome and Firefox I'm using FileReader. But if the user uploads a very large image, it doesn't load properly, or doesn't load at all, and can't be drawn onto the canvas.
I have tried accessing the same file directly from the computer and it works fine. So is there any way to increase the browser's memory so that large images load properly?
Or is there some other problem?
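For reference, the FileReader-to-canvas flow being described looks roughly like this (the element IDs are assumptions for the sketch):

document.getElementById('file-input').onchange = function (e) {
  var reader = new FileReader();
  reader.onload = function () {
    var img = new Image();
    img.onload = function () {
      var ctx = document.getElementById('canvas').getContext('2d');
      ctx.drawImage(img, 0, 0); // very large images can fail around here
    };
    img.src = reader.result; // data: URL produced by readAsDataURL
  };
  reader.readAsDataURL(e.target.files[0]);
};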
In my experience you can reliably bet on 5MB minimum for the platforms you mention above. Keep your data below that level and you should be pretty safe.
Read this article: http://diveintohtml5.ep.io/storage.html. It has some nice nuggets of info, but it's not all accurate, especially the part that says you can't up the limit.
I know for a fact that on iPhone once you reach the limit the phone will ask the user if they want to allow more space. (Sort of accurate, but not entirely)
On Android platforms the heap memory limit is set at 12MB. Not sure about the other platforms. Since you are going to be running in some kind of webcontainer (Webkit or other) I wouldn't worry too much about it. The containers themselves are pretty good at managing memory and implementing file caches to minimize their footprint.
I recommend you leave the memory optimizations and such for last. Who knows, you might not even need them. Don't optimize prematurely.
PS:
Look at Phonegap: http://phonegap.com/
I am developing a web application that is supposed to display land traffic in real time in any part of the world. For a couple months I've been developing it using JavaScript and OpenLayers (http://www.openlayers.org) framework.
Unfortunately, this solution appears to be inefficient. There are hundreds (200-300) of objects on the map that are updated every couple of minutes. The sheer operation of refreshing and rendering them takes a significant amount of time, which makes the application less usable (slow responsiveness to user actions).
At the moment I am considering changing the technology. Adobe Flex seems to be the most reasonable solution. There is at least one application written in it that does similar things to mine (http://casper.frontier.nl/).
However, I have a couple of concerns regarding Flex:
1.) Can it be easily integrated with the HTML/CSS/JavaScript-based part of the application? (For example, the graphical interface should be coherent when it comes to styles and colors.)
2.) I get the impression that with the latest browsers (mainly Chrome 9.0) JavaScript and CSS are becoming more efficient. What are the chances that in a couple of months JavaScript+CSS will make it possible to implement efficient, Flash-like rich internet applications? (A note is needed here: the famous canvas tag is not a solution for my problem, at least not for now. Rendering objects on the map with canvas proved less efficient than with traditional SVG because the canvas is really big: a whole browser window.)
3.) What are the chances that Flash technology will be abandoned (Apple's policy, growing HTML5 support, etc.) in the not-so-near future (a couple of years)? There is a possibility that my client would like to view this application on mobile devices, including the iPhone.
4.) Any other solutions for web-based interactive maps?
Can anybody address these issues?
Lazy comment repost:
I've used Google Maps JavaScript API + a custom canvas tile layer (see here and here) to draw maps with 10k+ markers, really quickly. Perhaps you just need to rethink your particular approach rather than totally rewriting your maps.
JavaScript running on a modern browser (say, IE7 or later) should easily be able to handle 200 or 300 object updates every few minutes. Granted, if you want to do all 300 updates at the same time, things might get a little slow. But if those updates occur spread out over that period, then you shouldn't have any trouble.
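For example, instead of applying all 300 updates in one go, you could spread them across frames; a minimal sketch, where updateMarker is a hypothetical per-marker function and the batch size is an arbitrary assumption:

function applyUpdates(updates, batchSize) {
  var i = 0;
  function step() {
    var end = Math.min(i + batchSize, updates.length);
    for (; i < end; i++) {
      updateMarker(updates[i]); // hypothetical: move/redraw one marker
    }
    if (i < updates.length) requestAnimationFrame(step); // continue next frame
  }
  requestAnimationFrame(step);
}
applyUpdates(pendingUpdates, 20); // e.g. 20 markers per frame; pendingUpdates is your queued data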
There are Asteroids games and 3D shooters written in JavaScript and that are very playable. They do dozens of updates per second.
I would suspect your framework (I know nothing about OpenLayers) or the way in which you're doing the updates before I suspected the platform.
My experience with Flash has been less than positive. Although it will interoperate with JavaScript, there are some strange edge cases that will trip you up, and my experience is that it's almost impossible not to trip over those edge cases unless what you're doing is truly trivial. And, of course, the lack of Flash support on the iPad and iPhone will make supporting those platforms impossible.
I think it's unlikely that Flash will be abandoned any time soon, as there are too many customers who continue to believe the silly notion that Flash is the way to build interactive Web apps. Although that was almost certainly true four years ago, browsers, computers, and JavaScript techniques have advanced to the point that the only use I currently have for Flash is to play video. And that use will go away in the next few years when HTML5 video becomes more prevalent. With Google's WebM video format and the expected high-quality tools to build WebM, Flash becomes almost irrelevant as a movie player, except for older content.
My advice would be to take a long hard look at your current implementation, study some other JavaScript applications that do frequent updates, and determine if it's really the platform rather than the way you're using it that is causing your performance problem.
No idea how many objects you can manage and update in JS, but at my company (flashmaps.com) we have built Flash-based maps handling many thousands of objects. The key issue in many cases is in fact that the map gets completely covered by the markers; we usually recommend filtering the markers in those cases. We have a lot of experience building Flash/Flex-based maps, so don't hesitate to ask me any questions about that.
By the way, I don't think Flash will fall out of use soon. Apple's strategy of controlling iPhone/iPad apps (the real reason behind the Flash ban) is causing a lot of trouble for web developers, who need to create specific versions of their websites for these devices; it's crazy. But I'm sure Apple will permit Flash someday... probably when many other tablets hit the streets supporting Flash. We'll see.
The awesome thing about MVC architecture is as long as you keep your domain logic separate from your business logic and UI, then it's relatively easy to create platform specific apps that access the same data. For example, you could build the same UI to run in the web browser (via html/javascript or flash player), on the desktop (via Air), and on an iPhone/iPad (via iOS) that all connect to the same server-side scripts. It's all a matter of personal choice which platform you choose. If a platform happens to fall out of fashion in the future, then you simply create a new UI on another platform.