I'm creating a flock simulation using HTML5 canvas and I'm trying to squeeze every bit of performance out of JavaScript. I've noticed a "weird" performance boost/drop depending on when/where I add boids to the system.
If I fill the boids array with 400 objects first and then start the animation (using requestAnimationFrame) I get a very decent 40-50fps in Chrome and Safari, and around 30fps in Firefox.
However, if the boids array gets filled with objects (400 of them again) during the animation (for example by dragging on the screen), then no matter the browser the performance always drops to about 15-20fps.
In both cases I use boids.push( new Boid() ); to fill the boids array. In the first case I do it from within a for loop and in the second I do it from the mouseDown event handler.
Any idea why the performance of the first would be so much better?
You can find both of the examples here:
Version A and Version B
If your requestAnimationFrame callback takes longer than ~10ms to execute, the frame rate drops and you get jank. The first method probably stays within that budget, or only occasionally exceeds it, since you are not quite getting ~60fps; but the second method clearly exceeds the ~10ms on those frames, and the browser has to skip 2-3 frames to finish executing the callback, which is why you get ~20fps.
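One quick way to verify this is to time the callback yourself; this is just an illustrative sketch, where updateAndDrawBoids stands in for the actual simulation step:

function tick() {
    var start = performance.now();
    updateAndDrawBoids();                      // hypothetical: your update + draw step
    var duration = performance.now() - start;
    if (duration > 10) {                       // over the ~10ms budget
        console.warn('long frame: ' + duration.toFixed(2) + 'ms');
    }
    requestAnimationFrame(tick);
}
requestAnimationFrame(tick);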
Check out the Rendering Performance article on Google Web Fundamentals.
Related
I'm doing some animation with Canvas now, and will be preparing a system for the artists to use to make interactive animations. I'll be using my own timeline, as the scenes will be created from some declarative non-JS input.

My question is: what's the right way to handle the per-frame callback and time measurement? In audio (my real-time background), the rule is that there should be only one master callback method called by the audio system, and any other objects register with it somehow. All time calculations are done by counting sample ticks of this callback, so there is one and only one true clock source (no asking the system clock for anything, just count samples). I assumed this is what I should do in my canvas app, but I'm seeing examples in books and sites where multiple objects use requestAnimationFrame and then check the frame rate by using Date objects to measure elapsed time.

Am I off base in thinking one master callback is still the most elegant way to go? And can I rely on measuring time in frame ticks, assuming I'm really getting 60fps when using requestAnimationFrame?
Your instinct is valid: route all your animation through one requestAnimationFrame loop to keep your animations well coordinated.
The current version of requestAnimationFrame in modern browsers automatically receives a highly accurate timestamp parameter based on the performance object. That timestamp is accurate to 1/1000th of a millisecond.
You cannot rely on counting the number of calls ("ticks") to the animation loop. The loop will be deferred if the prior loop's animation code has not completed or if the system is busy. Therefore, you are not guaranteed 60fps; you are only guaranteed the browser's best effort to get you 60fps.
Bottom line: requestAnimationFrame is not guaranteed to be called at 60fps intervals, so you are left with 2 basic animation alternatives:
Use the timestamp to calculate an elapsed time and position your objects based on that elapsed time (see the sketch after this list).
Increment a counter with each call to the animation loop and position your objects based on the counter.
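Here's a minimal sketch of the first alternative, using the timestamp that requestAnimationFrame passes to its callback; drawBall and speed are illustrative names, not anything from the question:

var startTime = null;
var speed = 0.1;                             // pixels per millisecond (illustrative)

function animate(timestamp) {                // timestamp is a DOMHighResTimeStamp
    if (startTime === null) { startTime = timestamp; }
    var elapsed = timestamp - startTime;     // ms since the first frame
    var x = speed * elapsed;                 // position depends on time, not tick count
    // drawBall(x);                          // hypothetical drawing routine
    requestAnimationFrame(animate);
}

requestAnimationFrame(animate);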
How can I control the rendering loop frame rate in KineticJS? The docs for Kinetic.Animation show a frame rate being passed to the render callback, and Kinetic.Tween seems to have no frame rate logic, but I don't see any way to force, say, 30fps when 60fps is possible.
Loads of context for the curious follows, but the question is that simple. If anyone reads on, other advice is welcome. If you already know the answer, don't waste your time reading on!
I'm developing a music app that combines some DOM-based GUI controls (current iteration using jQuery Mobile) and Canvas-based GUI controls (using KineticJS). The latter involve some animation. Because the animated elements are triggered by music playback, I'm using Kinetic.Tween to avoid the complexity of remembering how long a given note has been playing (which Kinetic.Animation would require doing).
This approach works great at 60fps in Chrome (on a fast machine) but is just slow enough on iOS 6.1 Safari (iPad 2) that manipulating controls while animations are happening gets a little janky. I'm not using WebGL (unless KineticJS or Chrome does this by default for canvas?), and that's not an option when I package for native UIWebView.
As I'm getting beyond prototype into wanting to make more committed tech decisions, I see the following options, in order of perceived goodness:
Figure out how to cap the frame rate. Because my animations heavily use alpha fades but do not involve motion, I believe I could get away with 20-30fps and look fine. Could also scale this up on faster devices.
Don't respond immediately to touch inputs, but add them to a queue which I poll at a constant interval and only use the freshest for things like touchmove. This has no impact on my non-interactive animated elements, but tackles the problem from the other direction, trying to reduce the load of user interaction. This would require making Kinetic controls static and manually tracking touch coordinates (not terrible effort if it actually helped).
Rewrite DOM-based GUI to canvas-based (KineticJS); rewrite WebAudio-based engine to HTML5 audio; leverage CocoonJS or Ejecta for GPU-acceleration. This means having to hand-code stuff like file choosers and nav menus and such (bad). Losing WebAudio is pretty serious as it eliminates features like DSP effects and very fine-grained, low-latency timing (which is working just fine on an iPad 2).
Rewrite the app to separate DOM based GUI and WebAudio from Canvas-based elements, leverage CocoonJS. I'm not sure if/how well this works out, but the fact that CocoonJS passes JavaScript code as strings between the 2 components makes me very skittish about how solid this idea is. It's probably doable, but best case I'm very tied to CocoonJS moving forwards. I don't like architecting this way, but maybe it's not as bad as it sounds?
Make animations less juicy. This is least good not because of its design impact but because, as it is, I'm only animating ~20 simple shapes at any time in my central view component, however they include transparency and span an area ~1000x300. Other components like sliders are similarly bare-bones. In other words, it's not very juicy right now.
Overcome severe allergy to Objective-C; forget about the browser, Android, and that other mobile OS. Have a fast app that performs natively and has shiny Apple-approved widgets. My biggest problem with this approach is not wanting to be stuck in Objective-C reality for years, skillset-wise. I just don't like it.
Buy an iPad 3 or later. Since I already am pretending Android doesn't exist (I don't have any devices to test), why not pretend no one still has iPad 2? I think this is passing the buck -- if I can get acceptable performance on iPad 2, I will feel confident about the app's performance as I add more features.
I may be overlooking options or otherwise naive about how to tackle this. Some would say what I'm trying to build is just silly. But it's working pretty well just not ready for prime time on the iPad 2.
Yes, you can control the Kinetic.Animation framerate
The Kinetic.Animation sends in a frame object which has a frame.time property.
That .time is a running timer that you can use to throttle your animation speed.
Here's an example that throttles the Kinetic.Animation: http://jsfiddle.net/m1erickson/Hn3cC/
var lastTime;
var frameDelay = 1000;

var loop = new Kinetic.Animation(function(frame) {
    var time = frame.time;
    if (!lastTime) { lastTime = time; }
    var elapsed = time - lastTime;
    if (elapsed >= frameDelay) {
        // frameDelay has expired, so animate stuff now
        // set lastTime for the next loop
        lastTime = time;
    }
}, layer);

loop.start();
Working from #markE's suggestions, I tried a few things and found a solution. It's ultimately not rocket science, but here's what I figured out:
First, I tried the hack of doubling Tween durations and targets, using a timer to stop them at 50%. This kinda sorta worked but was hard to make look good and was pretty error prone, with bogus targets like negative opacity or height or whatnot.
Second, having read the source to Tween and looked at the docs again for Animation, I decided I could locally use Animation instances instead of Tween instances and let the closure scope hang onto the relevant note properties. I eventually got this working smoothly and finally realized a big Duh!, which is that throttling the frame rate of several independently animating things does not in any way throttle the overall frame rate.
Lastly, I decided to give my component a render() method that calls itself in a loop with requestAnimationFrame, exits immediately if called before my clamp time, and inside render() updates all objects in the Kinetic canvas and calls layer.drawScene(). Because there is now only one animation loop, this drops the frame rate to whatever I need, and the app is fast on iPad 2 (it looks exactly the same to my eyes, too).
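For reference, here is a minimal sketch of that single-loop approach, assuming a hypothetical 30fps clamp; updateObjects stands in for the component's own update logic and layer is the Kinetic layer:

var minFrameMs = 1000 / 30;                  // clamp: render at most ~30 times per second
var lastRender = 0;

function render(timestamp) {
    requestAnimationFrame(render);           // keep the loop going
    if (timestamp - lastRender < minFrameMs) {
        return;                              // called before the clamp time: exit immediately
    }
    lastRender = timestamp;
    updateObjects();                         // hypothetical: update all objects in the Kinetic canvas
    layer.drawScene();                       // one draw call for the whole scene
}

requestAnimationFrame(render);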
So Kinetic is still helping with its higher-level canvas API, and so far my other control widgets are still easy to code using Kinetic to handle user input and dragging; they now perform much better since the big beast component is not eating up the CPU.
The short answer to my original question is that no, you can't lock the overall frame rate for very complex animations, but as Mark said, you can for anything that fits in a single Animation instance.
Note that I could have still used Animation without giving it a layer or explicitly calling any draw() methods, but since I'd still have to write all the logic to determine each element's current visual state, there was no gain in doing so. What would be very useful is if Tween could accept a parameter to not render automatically. This would simplify code like mine, as I could shorthand the animation on individual objects but still choose when to actually do the heavy lifting of rendering everything. Seeing how much this whole exercise gained in performance on the iPad 2, it might be worth adding this option to the framework.
I'm writing a game in JavaScript with canvas, and I sometimes experience lag. I think this is because I draw so much to the canvas.
My background for example isn't static, it moves from the right to the left, so each time, I have to clear the entire canvas. And I'm not clearing with just a color, I'm clearing with an image that moves every cycle of the gameloop.
I think this is an expensive operation and I was wondering whether there is something to make it less expensive, like for example using double buffering for clearing the background (no idea if you can use double buffering just for clearing the background?).
Or should I use double buffering in the entire game?
(I don't think so because I've read that the browser already does that for you)
Since you experience your lag 'sometimes', and it's not a low frame rate issue, I would rather turn my eyes towards the evil garbage collector: whenever it triggers, the application will freeze for up to a few milliseconds, and you'll get a missed frame.
To watch for this, you can use Google's profiling tools: in Timeline / Memory, press the record button. You can see that a GC occurred when there's a sudden drop in the memory used.
Beware when using this tool that it slows the app down a bit, and that it creates garbage of its own (!)...
Also, since any function call, for example, creates a bit of garbage, you can't have a completely flat memory line.
Below I show the diagram of a simple game I made before I optimized memory use;
there are up to 5 GCs per second:
And here is the diagram of the very same game after memory optimization: there's something like 1 GC per second. Because of the limitations I mentioned above, this is in fact the best we can get, and the game suffers no frame drops and feels more responsive.
To avoid creating garbage:
- never create objects, arrays, or functions;
- never grow or shrink an array;
- watch out for hidden object creation: Function.bind or Array.splice are two examples.
You can also:
- pool (recycle) your objects;
- use a particle engine to handle objects that are numerous / short-lived.
I made a pooling lib (here) and a particle engine (here) that you might use if you're interested.
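For illustration, a minimal object pool might look like this (a sketch, not the author's lib; bullets are just an example of numerous, short-lived objects):

var bulletPool = [];

function getBullet() {
    // reuse a pooled bullet if one is available, otherwise allocate a new one
    return bulletPool.length > 0 ? bulletPool.pop() : { x: 0, y: 0, alive: false };
}

function releaseBullet(bullet) {
    bullet.alive = false;
    bulletPool.push(bullet);                 // return it to the pool for later reuse
}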
But maybe the first thing to do is to look for where you create objects, and have them created only once. Since JS is single-threaded, you can in fact use static objects for quite a few things without any risk.
Just one small example:
function complicatedComputation(parameters) {
    // ...
    var x = parameters.x, y = parameters.y, ... ...
}

// you can call it creating an object each time:
var res = complicatedComputation({ x: this.x, y: this.y, ... ... });

// ... or, for instance, define a parameter object once:
complicatedComputation.parameters = { x: 0, y: 0, ... ... };

// then when you want to call:
var params = complicatedComputation.parameters;
params.x = this.x; params.y = this.y; ... ...
var res = complicatedComputation(params);
It has the drawback that the previous call's parameters remain, so if you don't set a parameter you don't get undefined but its previous value, so you might have to change your function a bit. On the other hand, if you call the function several times with similar parameters, it comes in very handy.
Happy memory hunting!
Double buffering is a technique to reduce flickering: You draw to one buffer while another buffer is displayed, and then swap them out in a single operation, so the user doesn't see any of the drawing in a partial state.
However, it does not really help with performance at all, as there is still the same amount of drawing. I would not use double buffering if you don't have a problem with flickering, as it requires more memory and flickering may already be prevented by the system through similar or other means.
If you think drawing the background is too expensive, there are several things that you could look into:
Do you scale the background image down while drawing? If so, create a downscaled version once and use it for clearing the background, reducing the drawing cost per iteration (see the sketch after this list).
Remember the "dirty" areas and just draw the portions of the background that were obscured. This will add some management overhead but reduce the number of pixels that need to be touched significantly
Make the background the background image of the canvas DOM element and just clear the canvas to transparency in each iteration. If the canvas is very large, you could make this faster by remembering the "dirty" areas and just clearing them.
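As a sketch of the first suggestion, you could pre-scale the background once into an offscreen canvas and then blit that each frame; bgImage, canvas and ctx are assumed to already exist in your game code:

// done once, at load time
var scaledBg = document.createElement('canvas');
scaledBg.width = canvas.width;
scaledBg.height = canvas.height;
scaledBg.getContext('2d').drawImage(bgImage, 0, 0, canvas.width, canvas.height);

// done every frame: a plain copy, no scaling cost
ctx.drawImage(scaledBg, 0, 0);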
Are you sure painting the background is the main cause of lag? Do you still have lag when you only repaint the background and do not do much else?
I have an HTML5 canvas using the 2D context. I am able to get up to 120 frames per second, but the rendering can be jagged, with the animation just jumping. I would like to know what may be causing it, especially at such a high (but pointless) frame rate. What are known ways of smoothing out animation as well?
The only thing that does come to mind is that the actual drawing is not being accounted for. So while the updating and drawing functions can run quickly, the painting onto the canvas is queued up for later. Which would then imply that I am not getting a true frames-per-second figure.
Although I can get 120 frames per second, that really means nothing. Because I am using setTimeout, I have no guarantee that the timing will be constant; thus, when the animation does stutter, it is because the frame rate, for a moment, has dropped significantly.
However, there is an alternative in the works that I managed to find. I'm a bit surprised by how hard this was to find.
http://paulirish.com/2011/requestanimationframe-for-smart-animating/
https://developer.mozilla.org/en/DOM/window.mozRequestAnimationFrame
http://dev.chromium.org/developers/design-documents/requestanimationframe-implementation
From what I can understand, the function allows the browser to optimise for animations. In theory, this should give a more consistent frame rate, which should give smoother animations.
It is also quite interesting to compare how Chrome, Safari, Opera and Firefox draw. I mainly test on Chrome 14 dev and Mozilla Aurora 6.0a, and the way they draw looks very different. Chrome seems to be able to draw directly. Firefox seems to be piping the pixels, as if it's sending them one by one to be drawn.
Which leads me to Opera
http://www.scribd.com/doc/58835981/122/Double-Buffering-with-Canvas
http://www.felinesoft.com/blog/index.php/2010/09/accelerated-game-programming-with-html5-and-canvas/
It turns out that WebKit-based and Gecko-based browsers use double buffering internally; that is, they collect all the drawing calls together and then draw them when the function returns. If you have a main loop function, like update, nothing is drawn until it has returned. Opera just draws as the drawing functions are called, but it's not hard to implement double buffering yourself. This is, supposedly, another method of smoothing out the animation.
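A minimal double-buffering sketch, assuming an on-screen canvas with id "game" and made-up dimensions:

var screen = document.getElementById('game').getContext('2d');
var buffer = document.createElement('canvas');
buffer.width = 800;                          // assumed dimensions
buffer.height = 600;
var bufferCtx = buffer.getContext('2d');

function drawFrame() {
    bufferCtx.clearRect(0, 0, buffer.width, buffer.height);
    // ... draw the whole scene into bufferCtx ...
    screen.drawImage(buffer, 0, 0);          // single blit to the visible canvas
}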
There is also another experimental feature that may help as well
http://badassjs.com/post/4064873160/webgl-2d-an-implementation-of-the-2d-canvas-context-in
I've been building Conway's Life with JavaScript / jQuery in order to run it in a browser Here. Chrome, Firefox, Opera and Safari do this pretty fast, so preferably don't use IE for this. IE9 is OK though.
While generating the new generations of Life I am storing the previous generations in order to be able to walk back through the history. This works fine until a certain point when memory fills up, which makes the browser(tab) crash.
So my question is: how can I detect when memory is filling up? I am storing an array for each generation in an array which forms the history of generations. This takes massive amounts of memory which crashes the browser after a few thousands of generations, depending on available memory.
I am aware of the fact that javascript can't check the amount of available memory but there must be a way...
I doubt that there is a way to do it. Even if there is, it would probably be browser-specific. I can suggest a different way, though.
Instead of storing all the data for each generation, store snapshots taken every once in a while. Since Conway's Game of Life is deterministic, you can easily re-generate future frames from a given snapshot. You'll probably want to keep a buffer of a few frames so that you can make rewinding nice and smooth.
In reality, this doesn't actually solve the problem, since you'll run out of space eventually. However, if you store every nth frame, your application will last n times longer, which might just be long enough. I would recommend imposing some hard limit on how far into the past you can rewind so that you have a cap on how much you have to store. Determine how many frames that would be (10 minutes at 30 FPS = 18,000 frames). Then divide that by the number of snapshots you can store (profile various web browsers to figure this out), and the result is the interval between snapshots you should use.
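A sketch of the snapshot idea, assuming a hypothetical step(grid) function that returns the next generation as a new 2D array without mutating its input:

var SNAPSHOT_INTERVAL = 100;                 // store every 100th generation (illustrative)
var snapshots = [];                          // sparse history
var generation = 0;

function advance(grid) {
    if (generation % SNAPSHOT_INTERVAL === 0) {
        // deep-copy the 2D array so later steps can't change the snapshot
        snapshots.push(grid.map(function(row) { return row.slice(); }));
    }
    generation++;
    return step(grid);                       // hypothetical next-generation function
}

function rewindTo(target) {
    // start from the nearest earlier snapshot and re-simulate forward
    var index = Math.floor(target / SNAPSHOT_INTERVAL);
    var grid = snapshots[index];
    for (var g = index * SNAPSHOT_INTERVAL; g < target; g++) {
        grid = step(grid);
    }
    return grid;
}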
Dogbert pretty much nailed it. You can't know exactly how much available memory there is but you can know how potentially large your dataset will be.
So, take the size of each object stored in the array, multiply by the array dimensions, and that's the size of one iteration. Multiply that by the desired number of iterations to see how much space it will take in total, and adjust accordingly. For example, a 100×100 grid at 8 bytes per cell is roughly 80 KB per iteration, so 10,000 iterations is on the order of 800 MB.
Or, inspired by Travis, simply run the pattern in reverse from the last known array. It is deterministic after all.