I am developing a web application (JS/CSS/AngularJS) which is going to run on a touchscreen client.
As the application grows I have run into a performance problem: I noticed that the application is not running at 60fps but at roughly 30. I therefore used Chrome's built-in Timeline tool to profile what's going on under the hood.
Okay - there are many long frames which limit the fps to ~30. The reason is the large GPU blocks (green) you can see in the screenshots.
Zoomed in:
It seems that the GPU is taking much time to... render? calculate? ...?
Unfortunately I don't have any further information about what the GPU is doing during this time. There are some CSS effects and transforms, but nothing really spectacular.
It's running on an Intel HD Graphics 530 (onboard), but I think it should still manage 60fps on this hardware.
I'd like to know if there is any way to get further information about what is slowing down the GPU that much, and/or whether there are any common CSS tricks to speed up my page. Or is there anything I can tweak on the GPU itself?
Thanks!
Related
I am working on a HTML canvas app that displays graphics to the screen. The graphics aren't important for the website, they just make things look pretty.
My problem is that the app needs the device's CPU to run at a certain speed, or the frame rate becomes unacceptable.
Any modern phone/laptop can easily run the app, but of course not everyone has updated tech.
At that point I'd like to drop support for the device and stop rendering the animations because it will just do more harm than good.
This idea is pretty standard on the internet. For example, if you want an image background for your site but you don't want mobile phones to load the same large image as desktops, you just use some CSS media queries to serve the large image only to desktops.
This is how we can get new features pushed out while keeping backward compatibility.
However, when it comes to detecting performance, this isn't as easy a task as it sounds. There's no way to get CPU specs with JavaScript, and even if I could, there's no way of telling what else the user is running on their machine.
This leaves me with two options: either run a small performance test before I start the canvas app, or start the app, run a few frames, and stop it if the frame rate is too low.
The problem is that both of these options are pretty sketchy, because the device may just have a "speed hiccup" at the start of the app, and then I've shut down the animations for nothing.
Also, if a user has a device that sits right on the border of the threshold, sometimes the animations will load and sometimes they won't, which would probably be confusing.
Is there any "standard" on the internet to deal with this sort of problem? Would it be best to leave a footnote at the bottom of the site window when animations are turned off?
Or is it just something you have to accept when pushing the boundaries and dealing with performance?
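For reference, this is roughly what I had in mind for the second option: sample the frame rate for a couple of seconds after start-up, skipping the first second to ride out any "speed hiccup", and only then decide. All of the numbers and the stopDecorativeAnimations() name below are placeholders:

    // Rough sketch of option 2: measure fps after a warm-up period, then decide.
    function measureFps(callback) {
        var WARMUP_MS = 1000;  // ignore the first second to ride out start-up hiccups
        var SAMPLE_MS = 2000;  // then count frames over two seconds
        var frames = 0;
        var start = null;

        function tick(now) {
            if (start === null) { start = now; }
            var elapsed = now - start;
            if (elapsed >= WARMUP_MS) { frames++; }
            if (elapsed < WARMUP_MS + SAMPLE_MS) {
                requestAnimationFrame(tick);
            } else {
                callback(frames / (SAMPLE_MS / 1000)); // average fps over the sample window
            }
        }
        requestAnimationFrame(tick);
    }

    measureFps(function(fps) {
        if (fps < 30) {
            stopDecorativeAnimations(); // placeholder for however the app turns animations off
        }
    });

This still doesn't solve the borderline-device problem, which is why I'm asking about a standard approach.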
I'm at the early stage of building my own game framework, based on CreateJS (especially the feature of exporting everything from the Flash IDE), and I have found that the framerate of CreateJS (EaselJS) is much worse in Firefox than in Chrome/IE.
Also, it seems that the framerate of the app (which can be changed using Ticker.setFPS) doesn't matter. It looks like Firefox has some problems with rendering: I've tried 60fps and 30fps, and in both cases there were problems; it looks like FF doesn't keep a stable rendering cadence.
I've tried to play with Ticker.timingMode (set it to Ticker.RAF_SYNCHED), but it didn't help either.
I've also found a lot of similar topics/questions on the internet, but no clear answer.
So I was wondering whether there is any way to improve the framerate/rendering in FF, or should we just work with it as it is now?
P.S.: The problem might be partly on the CreateJS side, because I've found a few nice HTML5 games (which, as far as I know, don't use CreateJS) with smooth animations in FF. Here is an example: https://www.netent.com/games/slots/dazzle-me/
EaselJS just facilitates canvas drawing operations and updates. I have built a number of applications and games with the CreateJS framework without Firefox-specific performance issues, so while something happening through CreateJS might be causing the issue, it is more likely related to the content and how CreateJS is used.
Different browsers have different issues and performance differences, including how they handle vectors, large images on the GPU, etc. Without seeing code or an example, it is hard to pinpoint where the performance is going.
What have you tried so far to isolate the performance?
Do you have a lot of large images, text, or vector content?
Are you using filters or Shadows?
How are you determining that the framerate is not working? RAF_SYNCHED attempts to normalize the computer's RAF framerate (usually about 60 fps, but dependent on lots of things).
Do you have a lot of children (like particles) going on?
Are you checking mouse position or hitTesting often?
If you can provide more info, code samples, working demos, etc., you can likely get a better diagnosis of what is happening.
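If it helps, one quick way to sanity-check the rate you are actually getting is to count ticks yourself and compare that against the Ticker's own measurement; something along these lines should work with the standard createjs.Ticker API (the one-second logging interval is arbitrary):

    // Count ticks per second and compare with the Ticker's own measured FPS.
    var tickCount = 0;
    createjs.Ticker.on("tick", function(event) {
        tickCount++;
    });
    setInterval(function() {
        console.log("ticks in the last second:", tickCount,
                    "| Ticker.getMeasuredFPS():", createjs.Ticker.getMeasuredFPS());
        tickCount = 0;
    }, 1000); // arbitrary one-second logging interval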
How can I control the rendering-loop frame rate in KineticJS? The docs for Kinetic.Animation show a frame rate being passed to the render callback, and Kinetic.Tween seems to have no frame-rate logic, but I don't see any way to force, say, 30fps when 60fps is possible.
Loads of context for the curious follows, but the question is that simple. If anyone reads on, other advice is welcome. If you already know the answer, don't waste your time reading on!
I'm developing a music app that combines some DOM-based GUI controls (current iteration using jQuery Mobile) and Canvas-based GUI controls (using KineticJS). The latter involve some animation. Because the animated elements are triggered by music playback, I'm using Kinetic.Tween to avoid the complexity of remembering how long a given note has been playing (which Kinetic.Animation would require doing).
This approach works great at 60fps in Chrome (on a fast machine) but is just slow enough on iOS 6.1 Safari (iPad 2) that manipulating controls while animations are happening gets a little janky. I'm not using WebGL (unless KineticJS or Chrome does this by default for canvas?), and that's not an option when I package for native UIWebView.
As I'm getting beyond prototype into wanting to make more committed tech decisions, I see the following options, in order of perceived goodness:
Figure out how to cap the frame rate. Because my animations heavily use alpha fades but do not involve motion, I believe I could get away with 20-30fps and look fine. Could also scale this up on faster devices.
Don't respond immediately to touch inputs, but add them to a queue which I poll at a constant interval, using only the freshest for things like touchmove. This has no impact on my non-interactive animated elements, but tackles the problem from the other direction by trying to reduce the load of user interaction. It would require making Kinetic controls static and manually tracking touch coordinates (not terrible effort if it actually helped); a rough sketch of what I mean follows after this list.
Rewrite DOM-based GUI to canvas-based (KineticJS); rewrite WebAudio-based engine to HTML5 audio; leverage CocoonJS or Ejecta for GPU-acceleration. This means having to hand-code stuff like file choosers and nav menus and such (bad). Losing WebAudio is pretty serious as it eliminates features like DSP effects and very fine-grained, low-latency timing (which is working just fine on an iPad 2).
Rewrite the app to separate DOM based GUI and WebAudio from Canvas-based elements, leverage CocoonJS. I'm not sure if/how well this works out, but the fact that CocoonJS passes JavaScript code as strings between the 2 components makes me very skittish about how solid this idea is. It's probably doable, but best case I'm very tied to CocoonJS moving forwards. I don't like architecting this way, but maybe it's not as bad as it sounds?
Make animations less juicy. This is least good not because of its design impact but because, as it is, I'm only animating ~20 simple shapes at any time in my central view component, however they include transparency and span an area ~1000x300. Other components like sliders are similarly bare-bones. In other words, it's not very juicy right now.
Overcome severe allergy to Objective-C; forget about the browser, Android, and that other mobile OS. Have a fast app that performs natively and has shiny Apple-approved widgets. My biggest problem with this approach is not wanting to be stuck in Objective-C reality for years, skillset-wise. I just don't like it.
Buy an iPad 3 or later. Since I already am pretending Android doesn't exist (I don't have any devices to test), why not pretend no one still has iPad 2? I think this is passing the buck -- if I can get acceptable performance on iPad 2, I will feel confident about the app's performance as I add more features.
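For completeness, here is the rough sketch mentioned in option 2 above: collect touchmove coordinates but only act on the freshest one at a fixed polling interval. The handler name and the 50ms interval are just guesses:

    var pendingTouch = null;

    // Record the latest touch position but do nothing with it yet.
    document.addEventListener('touchmove', function(e) {
        pendingTouch = e.touches[0]; // keep only the freshest coordinates
    });

    // Poll at a fixed interval and react only to the most recent position.
    setInterval(function() {
        if (pendingTouch) {
            handleTouchMove(pendingTouch.clientX, pendingTouch.clientY); // hypothetical handler
            pendingTouch = null;
        }
    }, 50); // 50ms (~20Hz) is just a guess at a workable rate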
I may be overlooking options or otherwise being naive about how to tackle this. Some would say what I'm trying to build is just silly. But it's working pretty well, just not ready for prime time on the iPad 2.
Yes, you can control the Kinetic.Animation framerate
The Kinetic.Animation sends in a frame object which has a frame.time property.
That .time is a running timer that you can use to throttle your animation speed.
Here's an example that throttles the Kinetic.Animation: http://jsfiddle.net/m1erickson/Hn3cC/
var lastTime;
var frameDelay = 1000;

var loop = new Kinetic.Animation(function(frame) {
    var time = frame.time;
    if (!lastTime) { lastTime = time; }
    var elapsed = time - lastTime;
    if (elapsed >= frameDelay) {
        // frameDelay has expired, so animate stuff now
        // set lastTime for the next loop
        lastTime = time;
    }
}, layer);

loop.start();
Working from #markE's suggestions, I tried a few things and found a solution. It's ultimately not rocket science, but sharing what I figured out:
First, I tried the hack of doubling Tween durations and targets, using a timer to stop them at 50%. This kinda sorta worked, but it was hard to make it look good and was pretty error prone, since it meant coding bogus targets like negative opacity or height.
Second, having read the source for Tween and looked at the docs again for Animation, I decided I could use Animation instances locally instead of Tween instances and let the closure scope hang onto the relevant note properties. I eventually got this working smoothly and then realized a big Duh!: throttling the frame rate of several independently animating things does not in any way throttle the overall frame rate.
Lastly, I decided to give my component a render() method that calls itself in a loop with requestAnimationFrame and exits immediately if called before my clamp time; inside render() I update all objects in the Kinetic canvas and call layer.drawScene(). Because there is now only one animation loop, this drops the frame rate to whatever I need, and the app is fast on the iPad 2 (it looks exactly the same to my eyes too).
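In rough outline, that loop looks something like this; layer.drawScene() is real Kinetic API, but the clamp value and updateAnimatedNotes() are just stand-ins for my own code:

    var FRAME_CLAMP = 33; // ms, roughly 30fps; scale up or down per device
    var lastDraw = 0;

    function render() {
        requestAnimationFrame(render); // schedule the next pass first
        var now = Date.now();
        if (now - lastDraw < FRAME_CLAMP) {
            return; // called again too soon, skip this frame entirely
        }
        lastDraw = now;
        updateAnimatedNotes(now); // stand-in for updating each object's visual state
        layer.drawScene();        // one draw call for the whole Kinetic layer
    }
    render();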
So Kinetic is still helping through its higher-level canvas API, and so far my other control widgets are still easy to code using Kinetic to handle user input and dragging; they now perform much better since the big beast component is no longer eating up the CPU.
The short answer to my original question is that no, you can't lock the overall frame rate for very complex animations, but as Mark said, you can for anything that fits in a single Animation instance.
Note that I could still have used Animation without giving it a layer or explicitly calling any draw() methods, but since I'd still have to write all the logic to determine each individual element's current visual state, there was no gain in doing this. What would be very useful is if Tween could accept a parameter to not render automatically. This would simplify code like mine, as I could shorthand the animation on individual objects but still choose when to actually do the heavy lifting of rendering everything. Seeing how much this whole exercise gained in performance on the iPad 2, it might be worth adding this option to the framework.
My website has a jQuery script (from Shadow animation jQuery plugin) which constantly changes the colour of box shadow of various <div>s on the home page.
The animation is not essential, but it does take up a lot of CPU time on slower machines.
Is it possible to find out if the script will run 'too slowly'? I can then disable it before it impacts performance.
Is this even a good idea? If not, is there an easy way to break up the jQuery animate?
This may indirectly solve your problem. Pick a few algorithms and performance tests from this site http://dromaeo.com/ that seem similar to your jQuery plugin. Don't run comprehensive tests as they do on the site. Instead, pick fairly small and fast algorithms, and run them for an unnoticeable period of time.
Use a tiny predefined time span to limit how long these tests are allowed to run. Say that span is 200 ms: if on a fast machine with browser A you can complete 100 iterations, while some random user's machine is only able to complete 5, then you may want to consider disabling the animation on that user's machine. Tweak until you find the optimal numbers.
As a bonus, send all test results back to your server so you have a better idea of where your users lie in the speed spectrum. If a big majority of users are using slower computers and older browsers, then it just may make sense to remove the thing altogether.
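A bare-bones version of such a timed test might look like this; the busy-work inside the loop and the cut-off of 5 iterations are only placeholders, to be replaced with something closer to your plugin's real workload:

    // Run a small piece of representative work for a fixed budget and count iterations.
    function quickBenchmark(budgetMs) {
        var iterations = 0;
        var end = Date.now() + budgetMs;
        while (Date.now() < end) {
            var s = "";
            for (var i = 0; i < 1000; i++) { s += i; } // placeholder busy-work
            iterations++;
        }
        return iterations;
    }

    var score = quickBenchmark(200);      // the 200 ms budget from above
    if (score < 5) {                      // threshold found by tweaking, as described
        disableShadowAnimation();         // placeholder: skip attaching the plugin
    }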
You could probably do it by timing a few times round a loop which did some intensive processing on page load, but that's going to slow the page and add even further to CPU load, so it doesn't seem like a great solution.
A compromise I've used in the past, though, was to make the decision based on browser version, for example, Internet Explorer 6 users get simpler content whereas newer browsers with better JavaScript performance get the animation. That seemed to work pretty well at a practical level. In practice, browser choice is a big factor in JavaScript performance and you might get a 90% fit with what you want very simply just by taking that into account.
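If you go that route, the check itself can be very small; something like this, where the regex and the startShadowAnimation() name are only illustrative:

    // Very rough browser gate in the spirit described above; adjust to taste.
    var isOldIE = /MSIE [1-8]\./.test(navigator.userAgent);
    if (!isOldIE) {
        startShadowAnimation(); // illustrative placeholder for attaching the plugin
    }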
You could do something like $(window).width() to get the browser width. Using this you could make the assumption that anything < 1024px wide is likely to be either a netbook, smartphone or old computer.
This wouldn't be nearly as accurate as timing a loop, but it's much more efficient.
Obviously this rule is a generalisation and there will be slow computers with screens wider than 1024px. But in general, a 1024px+ computer would typically be able to handle a fair bit of JavaScript (until the owner loads it up with software, virus scanners and browser toolbars!).
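In code it's just a one-off check on load; the function name below is only a placeholder for whatever kicks off your animation:

    // One-off width check on load; 1024 is just the cut-off discussed above.
    if ($(window).width() >= 1024) {
        startShadowAnimation(); // placeholder for attaching the jQuery animation
    }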
hope this is useful!
I'm developing a site which has a Flash background playing a small video loop scaled to fill the whole background. Over the top I have a number of HTML elements which are animated using JavaScript. The problem I am having is that (predominantly in FF, but also in others to a lesser degree) the Flash seems to be causing my JavaScript animations to run rather jerkily, and in some cases to skip the animation altogether and just jump to the end state.
Does anybody have any thoughts on how to make the 2 work together nicely?
Many thanks
Matt
You'll notice the same effect on BBC iPlayer - if you've played a few videos, then use the left and right scroller: the JavaScript animation is no longer smooth.
This is more noticeable in FF.
Chrome creates an entirely separate process for the Flash plugin, and is therefore smoother; Safari is quite lightweight and is therefore smoother at times.
Bit of a bugger really - the only thing I can suggest is to ensure your SWF is optimised for CPU use; if it contains lots of code, make sure you're doing good memory management.
I had the same trouble once and I targeted FP10 - this offloaded a lot of visual work from the CPU (and therefore the current browser process) and gave it to the GPU.
--
Aside from this, you're pretty much at the mercy of how powerful the client's machine is.
In addition to my answer above:
Thanks Glycerine. Do you think there would be any performance improvements if it was compressed into an older format? Or even just a SWF? There is no audio, so it's just an animated background really. – Matt Brailsford
I think a newer format would be better - if you can do FP10 then, again, you'll be able to utilise the user's GPU; if you're working in CS3, it's best to go for FP9.5.
Ensure your stage objects are cached as bitmaps if you're using large vectors:
http://www.adobe.com/devnet/flash/articles/bitmap_caching_print.html
This ensures any heavy animation (even animation we regard as light) will run more smoothly, because the objects are turned into pixel data as opposed to complicated vector information. It's a small fix, but it may work.
Try to target the AS3 engine as well, even if you're not using code. I keep saying it runs better than the AS2/AS1 engines; people argue with me about this, but I'm sure you'll find your favourite camp.
If you have very large images scaled down, use a smaller form factor by Photoshopping them to a smaller size. This will not only improve rendering speed but also reduce the SWF file size.
Try those.