animation callback architecture and time measurement when using HTML5 Canvas? - javascript

I'm doing some animation with Canvas now, and will be preparing a system for the artists to use to make interactive animations. I'll be using my own timeline, as the scenes will be created from some declarative non-JS input. My question is: what's the right way to handle the per-frame callback and time measurement? In audio (my real-time background), the rule is that there should be only one master callback method called by the audio system, and any other objects register with it somehow. All time calculations are done by counting sample ticks of this callback, so there is one and only one true clock source (no asking the system clock for anything, just count samples). I assumed this is what I should do in my canvas app, but I'm seeing examples in books and sites where multiple objects use requestAnimationFrame and then check the frame rate by using Date objects to measure elapsed time. Am I off base in thinking one master callback is still the most elegant way to go? And can I rely on measuring time in frame ticks, assuming I'm really getting 60fps when using requestAnimationFrame?

Your instinct is valid: route all your animation through one requestAnimationFrame loop to keep your animations well coordinated.
In modern browsers the requestAnimationFrame callback automatically receives a high-resolution timestamp argument based on the performance object (the same clock as performance.now()), with sub-millisecond precision.
You cannot rely on counting the number of calls ("ticks") to the animation loop. The loop will be deferred if the prior loop's animation code has not completed or if the system is busy, so you are not guaranteed 60fps; you are only guaranteed the browser's best effort to get you 60fps.
Bottom line: requestAnimationFrame is not guaranteed to be called at 60fps intervals, so you are left with two basic animation alternatives:
Use the timestamp to calculate an elapsed time and position your objects based on elapsed time (see the sketch below).
Increment a counter with each call to the animation loop and position your objects based on the counter.
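A minimal sketch of the first approach, with a single master loop that other objects register with (the names animations and register are just illustrative, not from any library):

var animations = [];

function register(fn) {
    animations.push(fn);                   // fn receives (elapsedMs, timestamp)
}

var startTime = null;

function masterLoop(timestamp) {
    if (startTime === null) { startTime = timestamp; }
    var elapsed = timestamp - startTime;   // ms since the loop started, from one clock

    // Drive every registered animation from the same timestamp.
    for (var i = 0; i < animations.length; i++) {
        animations[i](elapsed, timestamp);
    }

    requestAnimationFrame(masterLoop);
}

requestAnimationFrame(masterLoop);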

Related

WebGL/OpenGL: comparing the performance

For educational purposes I need to compare the performance of WebGL with OpenGL. I have two equivalent programs written in WebGL and OpenGL, now I need to take the frame rate of them and compare them.
In Javascript I use requestAnimationFrame to animate, and I noticed that it causes the frame rate to be always at 60 FPS, and it goes down only if I switch tab or window. On the other hand if I always call the render function recursively, the window freezes for obvious reasons.
This is how I am taking the FPS:
var stats = new Stats();
stats.domElement.style.position = 'absolute';
stats.domElement.style.left = '450px';
stats.domElement.style.top = '750px';
document.body.appendChild( stats.domElement );

setInterval( function () {
    stats.begin();
    stats.end();
}, 1000 / 60 );

var render = function () {
    requestAnimationFrame(render);
    renderer.render(scene, camera);
};
render();
Now the problem of always having the scene at 60 FPS is that I cannot actually compare it with the frame rate of OpenGL, since OpenGL redraws the scene only when it is somehow modified (for example if I rotate the object) and glutPostRedisplay() gets called.
So I am wondering if there is a way in WebGL to redraw the scene only when it is necessary, for example when the object is rotated or when some attributes in the shaders are changed.
You can't compare framerates directly across GPUs in WebGL by pushing frames. Rather you need to figure out how much work you can get done within a single frame.
So, basically pick some target framerate and then keep doing more and more work until you go over your target. When you've hit your target that's how much work you can do. You can compare that to some other machine or GPU using the same technique.
Some people will suggest using glFinish to check timing. Unfortunately that doesn't actually work, because it stalls the graphics pipeline, and that stalling is not something that normally happens in a real app. It would be like timing how fast a car can go from point A to point B, but instead of starting long before A and ending long after B, you slam on the brakes just before you reach B and measure the time when you get there. That time includes all the time it took to slow down, which is different on every GPU, different between WebGL and OpenGL, and even different for each browser. You have no way of knowing how much of the time was spent slowing down and how much was spent doing the thing you actually wanted to measure.
So instead, you need to go full speed the entire time. Just like a car you'd accelerate to top speed before you got to point A and keep going top speed until after you pass B. The same way they time cars on qualifying laps.
You don't normally stall a GPU by slamming on the brakes (glFinish), so adding the stopping time to your timing measurements is irrelevant and doesn't give you useful info. Using glFinish you'd be timing drawing + stopping. If one GPU draws in 1 second and stops in 2, and another GPU draws in 2 seconds and stops in 1, your timing will say 3 seconds for both GPUs. But if you ran them without stopping, one GPU would draw 3 things a second and the other would only draw 1.5 things a second. One GPU is clearly faster, but using glFinish you'd never know that.
Instead you run full speed by drawing as much as possible and then measure how much you were able to get done and maintain full speed.
Here's one example:
http://webglsamples.org/lots-o-objects/lots-o-objects-draw-elements.html
It basically draws each frame; if the frame rate was 60fps it draws 10 more objects the next frame, and if the frame rate was less than 60fps it draws fewer.
Because browser timing is not perfect you might want to choose a slightly lower target, like 57fps, to find how fast it can go.
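A rough sketch of that idea (drawOneObject and the exact increments are placeholders, not taken from the sample):

// Keep adding work while the frame time stays under the target.
var numObjects = 10;
var lastTime = 0;
var targetFrameTime = 1000 / 57;            // slightly under 60fps to allow for timer jitter

function frame(now) {
    var frameTime = now - lastTime;
    lastTime = now;

    if (frameTime <= targetFrameTime) {
        numObjects += 10;                   // kept up: try more work next frame
    } else if (numObjects > 10) {
        numObjects -= 10;                   // fell behind: back off
    }

    for (var i = 0; i < numObjects; i++) {
        drawOneObject(i);                   // placeholder for your actual draw call
    }

    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);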
On top of that, WebGL and OpenGL really just talk to the GPU, and the GPU does the real work. The work done by the GPU will take the exact same amount of time regardless of whether WebGL or OpenGL asked for it. The only difference is in the overhead of setting up the GPU. That means you really don't want to draw anything heavy. Ideally you'd draw almost nothing: make your canvas 1x1 pixel, draw a single triangle, and check the timing (as in how many single triangles you can draw, one triangle at a time, in WebGL vs OpenGL at 60fps).
It gets even worse though. A real app will switch shaders, switch buffers, switch textures, and update attributes and uniforms often. So, what are you timing? How many times you can call gl.drawBuffers at 60fps? How many times you can call gl.enable or gl.vertexAttribPointer or gl.uniform4fv at 60fps? Some combination? What's a reasonable combination? 10% calls to gl.vertexAttribPointer + 5% calls to gl.bindBuffer + 10% calls to gl.uniform? The timing of those calls is the only thing different between WebGL and OpenGL, since ultimately they're talking to the same GPU and that GPU will run at the same speed regardless.
You actually do not want to use framerate to compare these things because, as you just mentioned, you are artificially capped at 60 FPS due to VSYNC.
The number of frames presented will be capped by the swap buffer operation when VSYNC is employed and you want to factor that mess out of your performance measurement. What you should do is start a timer at the beginning of your frame, then at the end of the frame (just prior to your buffer swap) issue glFinish (...) and end the timer. Compare the number of milliseconds to draw (or whatever resolution your timer measures) instead of the number of frames drawn.
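In WebGL that approach would look roughly like this (drawScene is a placeholder for your own rendering code):

var start = performance.now();

drawScene();       // placeholder: issue all of your draw calls here
gl.finish();       // block until the GPU has actually finished the work

var elapsedMs = performance.now() - start;
console.log('frame took ' + elapsedMs.toFixed(2) + ' ms');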
The correct solution is to use the ANGLE_timer_query extension when available.
Quoting from the specification:
OpenGL implementations have historically provided little to no useful timing information. Applications can get some idea of timing by reading timers on the CPU, but these timers are not synchronized with the graphics rendering pipeline. Reading a CPU timer does not guarantee the completion of a potentially large amount of graphics work accumulated before the timer is read, and will thus produce wildly inaccurate results. glFinish() can be used to determine when previous rendering commands have been completed, but will idle the graphics pipeline and adversely affect application performance.
This extension provides a query mechanism that can be used to determine the amount of time it takes to fully complete a set of GL commands, and without stalling the rendering pipeline. It uses the query object mechanisms first introduced in the occlusion query extension, which allow time intervals to be polled asynchronously by the application.
(emphasis mine)
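In WebGL the equivalent functionality is exposed, where supported, as the EXT_disjoint_timer_query extension. A rough sketch of timing a set of draw calls with it (drawScene is a placeholder, and availability of the extension is assumed):

var ext = gl.getExtension('EXT_disjoint_timer_query');

var query = ext.createQueryEXT();
ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
drawScene();                                   // placeholder for your draw calls
ext.endQueryEXT(ext.TIME_ELAPSED_EXT);

// The result is asynchronous: poll on a later frame without stalling the pipeline.
function pollQuery() {
    var available = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
    var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
    if (available && !disjoint) {
        var gpuTimeNs = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT);
        console.log('GPU time: ' + (gpuTimeNs / 1e6).toFixed(2) + ' ms');
    } else if (!available) {
        requestAnimationFrame(pollQuery);
    }
}
requestAnimationFrame(pollQuery);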

Pushing objects into javascript array performance weirdness

I'm creating a flock simulation using html5 canvas and I'm trying to squeeze every bit of performance out of javascript. I've noticed a "weird" performance boost/blow depending on when/where I add boids to the system.
If I fill the boids array with 400 objects first and then start the animation (using requestAnimationFrame) I get a very decent 40-50fps in Chrome and Safari, and around 30fps in Firefox.
However if the boids array gets filled with objects (400 of them again) during the animation (for example by dragging on the screen), then no matter the browser the performance always drops to about 15-20fps.
In both cases I use boids.push( new Boid() ); to fill the boids array. In the first case I do it from within a for loop and in the second I do it from the mouseDown event handler.
Any idea why the performance of the first would be so much better?
You can find both of the examples here:
Version A and Version B
If your requestAnimationFrame callback takes longer than roughly 10ms to execute, the frame rate drops and you will see jank. The first method probably stays within that budget, or only slightly exceeds it (since you are not getting a full ~60fps); but the second method clearly exceeds the ~10ms budget on that frame, and the browser has to skip 2-3 frames to finish executing the callback, which is why you get ~20fps.
Check out this article on Google Web Fundamentals: Rendering Performance
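If you want to confirm this in your own code, a quick (illustrative) check is to time each callback against that rough 10ms budget (updateBoids and drawBoids stand in for the flock code):

function tick(timestamp) {
    var start = performance.now();

    updateBoids();    // placeholder: flock simulation step
    drawBoids();      // placeholder: canvas drawing

    var duration = performance.now() - start;
    if (duration > 10) {
        console.warn('Frame budget exceeded: ' + duration.toFixed(1) + ' ms');
    }
    requestAnimationFrame(tick);
}
requestAnimationFrame(tick);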

Calling update method vs having a setInterval

In my game engine, there are objects that need to be updated periodically. For example, a scene can be lowering its alpha, so I set an interval that does it. Also, the camera sometimes needs to jiggle a bit, which requires interpolation on the rotation property.
I see that there are two ways of dealing with these problems:
Have an update() method that calls all other object's update methods. The objects track time since they were last updated and act accordingly.
Do a setInterval for each object's update method.
What is the best solution, and why?
setInterval does not keep to a clock; it just sequences events as they come in. Browsers tend to keep at least some minimal amount of time between events, so if you have 10 events that all need to fire after 100ms, you'll likely see the last event fire well into the 200ms range. (This is easy enough to test.)
Having only one event (and calling update on all objects) is in this sense better than having each object set its own interval. There may be other considerations, but for at least this reason option 2 is unfeasible.
Here is some more about setInterval: How do browsers determine what time setInterval should use?
The best way I have found to build a good update() function while keeping a good frame rate and low load is the following:
Have a single update() method which draws your frame by looping over a queue/schedule of drawable objects and calling each object's own update() function; objects are added to this update queue/schedule much like event listeners.
This way you don't have to loop over objects which are not scheduled for a redraw/update (like menu buttons or crosshairs), and you don't have an overabundance of intervals running for all drawable objects.
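A minimal sketch of that idea (the names scheduled and schedule are illustrative only):

var scheduled = [];

function schedule(obj) {                  // objects ask to be updated on the next frame
    if (scheduled.indexOf(obj) === -1) { scheduled.push(obj); }
}

function update(timestamp) {
    var toUpdate = scheduled;
    scheduled = [];                       // objects must reschedule themselves if needed

    for (var i = 0; i < toUpdate.length; i++) {
        toUpdate[i].update(timestamp);    // each object runs its own update()/draw logic
    }
    requestAnimationFrame(update);
}
requestAnimationFrame(update);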
I recommend using the update() method over the setInterval.
Also, I would guess that the timing of several setIntervals running at once would be unreliable.
Another consideration: depending on what else is happening in your game, using a bunch of separate intervals could introduce race conditions in the counting and comparing of scores, etc.
The proposed approaches are not tied exclusively to the related method. That is, you can use setInterval to call all the update methods, or you can have each object update itself by repeatedly calling setTimeout.
More to the point is that a single timer is less overhead than multiple timers (of either type). This really matters when you have lots of timers. On the other hand, only one timer may not suit because some objects might need to be updated more frequently than others, or to a different schedule, so just try to minimise them.
An advantage of setTimeout is that the interval to the next call can be adjusted to meet specific scheduling requirements, e.g. if one call is delayed you can skip the next one or schedule it sooner. setInterval will slowly drift relative to a consistent clock, and one-off adjustments are more difficult.
On the other hand, setInterval only needs to be called once, so you don't have to keep re-arming the timer. You may end up with a combination.
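For example, a self-adjusting setTimeout loop (just a sketch; updateAll is a placeholder) can compensate for drift against a reference clock:

var interval = 100;                                  // desired ms between updates
var expected = performance.now() + interval;

function step() {
    var drift = performance.now() - expected;        // how late this call fired

    updateAll();                                     // placeholder: update your objects

    expected += interval;
    setTimeout(step, Math.max(0, interval - drift)); // schedule the next call sooner if late
}
setTimeout(step, interval);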

How do I use javascript to figure out if my animation's too slow and instead use instant changes?

I have a web page that uses javascript+css animation. On many computers, this will run very well, but on some slower systems (like tablets) it can be awful. In those cases I want to gracefully degrade to simply moving the objects to their final position instantly without animation. My problem is that:
I don't want to force a user to have to make the choice (I want to figure out programmatically whether animation will be too slow)
I don't want them to suffer through 5-10 seconds of super slow animations while my code tries to figure out the FPS of the javascript animations. Instead I want a fast way of figuring this out.
Is this even possible? Or do I have to run animations for 3-5 seconds to figure out if animations will run well on their system? (Checking it in 1 second or less means hiccups in their system will kill animation incorrectly)
All I've come up with so far is something like this (pseudo code):
Starting the Animation
//Total number of steps in our animation (This is calculated normally)
var totalSteps = 20;
//Amount of time between each setInterval/setTimeout
var delay = 20; //milliseconds
//Total expected time to animate entire process
var expectedDuration = delay * totalSteps;
//Our starting time
var startTime = new Date().getTime();
/*** Start the animation ***/
startAnimation(...);
At the final step of the animation
//Current time
var endTime = new Date().getTime();
//Actual time it took
var actualDuration = endTime - startTime;
//Check difference in expected time vs actual time
var diff = actualDuration - expectedDuration;
The problem is that code forces me to run animation for a bit first, thus subjecting users on slow systems to slow, jerky animations for a bit.
Is there a way to do this?
My suggestion would be this:
1) Use an animation function that runs for a fixed time and adjusts the step size according to how fast each step is running. This will never subject a user to an animation that takes longer than intended; it may be choppy, but it won't be slow to finish. The general idea for this type of animation algorithm is that you set a total time for the animation and calculate an expected number of steps. You start running each step, but at each step you check the elapsed time to see if you are behind schedule. If you are behind schedule (because the host computer is too slow), you jump forward the amount needed to get you back on schedule. This jump forward makes the animation choppier than desired, but keeps you on schedule. All the animation libraries I've seen, like jQuery and YUI, work this way (see the sketch after this list).
2) In each of the first few animations (done the way above), accumulate a stepCnt that tells you how many steps were done in each animation in the fixed time.
3) From some experience running your animations on fast and slow devices, figure out what stepCnt value signifies a performance that is slow enough that your UI would be better off without the animation at all.
4) Make your code adaptive. If, after the first few animations from step 2), you see that the stepCnt is below your threshold (that you determined in step 3), then set a global flag that you want to skip animations so all future animations will just go right to the end state.
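A sketch of steps 1) and 2) combined (totalDuration, drawFrameAt and the threshold of 15 steps are assumptions for illustration, not values from the question):

// Fixed-duration animation that jumps ahead when behind schedule,
// while counting how many steps it actually managed (stepCnt).
var totalDuration = 400;                      // ms the animation should take (assumed)
var startTime = performance.now();
var stepCnt = 0;

function animateStep() {
    var elapsed = performance.now() - startTime;
    var progress = Math.min(elapsed / totalDuration, 1);   // 0..1, catches up if late

    drawFrameAt(progress);                    // placeholder: position objects for this progress
    stepCnt++;

    if (progress < 1) {
        requestAnimationFrame(animateStep);
    } else if (stepCnt < 15) {                // threshold from your own measurements (step 3)
        window.skipAnimations = true;         // step 4: future animations jump to the end state
    }
}
requestAnimationFrame(animateStep);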
In my experience, slowness is always tied to a particular browser or OS. So just find out what browser/OS configurations are running too slow, and reduce the animation steps only for those.
This way users don't have to wait for your code to figure things out before getting a good experience.
One way to find out what browsers/OSs are slow, is to use Google Analytics (or any other tracking API), and track an event when the animation starts, then another one when the animation ends. You can then easily calculate the average animation run time for each configuration and act accordingly.
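Something like this sketch would do (startAnimation and trackEvent are hypothetical placeholders for your own animation entry point and analytics API):

var animStart = performance.now();

startAnimation(function onDone() {
    var animDuration = performance.now() - animStart;
    // Report the measured duration; average it per browser/OS configuration later.
    trackEvent('animation', 'duration-ms', Math.round(animDuration));
});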

Scheduling update "threads" in JS / WebGL

Currently, I am rendering WebGL content using requestAnimationFrame which runs at (ideally) 60 FPS. I'm also concurrently scheduling an "update" process, which handles AI, physics, and so on using setTimeout. I use the latter because I only really need to update objects roughly 30 times per second, and it's not really part of the draw sequence; it seemed like a good idea to save the remaining CPU for actual render passes, since most of my animations are fairly hardware intensive.
My question is one of best practices. setTimeout and setInterval are not particularly kind to battery life and CPU consumption, especially when the browser is not in focus. On the other hand, using requestAnimationFrame (or tying the updates directly into the existing render phase) will potentially enforce far more updates every second than are strictly necessary, and may stop updating altogether when the browser is not in focus or at other times the browser deems unnecessary for "animation".
What is the best course of action for updating, but not rendering content?
setTimeout and setInterval are not particularly kind to battery life and CPU consumption
Let's be honest: Neither is requestAnimationFrame. The difference is that RAF automatically turns off when you leave the tab. That behavior can be emulated with setTimeout if you use the Page Visibility API, though, so in reality the power consumption problems between the two are about on par if used intelligently.
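A rough sketch of that emulation with the Page Visibility API (updateWorld and the 30hz interval are illustrative):

var UPDATE_MS = 1000 / 30;
var timerId = null;

function updateLoop() {
    updateWorld();                                // placeholder: AI, physics, etc.
    timerId = setTimeout(updateLoop, UPDATE_MS);
}

document.addEventListener('visibilitychange', function () {
    if (document.hidden) {
        clearTimeout(timerId);                    // stop updating in a background tab
        timerId = null;
    } else if (timerId === null) {
        updateLoop();                             // resume when the tab becomes visible
    }
});

updateLoop();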
Beyond that, though, setTimeout/setInterval is perfectly appropriate for your case. The only thing you may want to be aware of is that you'll be hard pressed to get it perfectly in sync with the render loop. You'll have cases where you may draw one too many times before your animation update hits, which can lead to minor stuttering. If you're rendering at 60hz and updating at 30hz it shouldn't be a big issue, but you'll want to be aware of it.
If staying perfectly in sync with the render loop is important to you, you could simply have an if(framecount % 2) { updateLogic(); } at the top of your RAF callback, which effectively limits your updates to 30hz (every other frame) and is always in sync with the draw.
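In code, that might look something like this (updateLogic and render are illustrative names):

var framecount = 0;

function frame(timestamp) {
    if (framecount % 2) {
        updateLogic(timestamp);   // placeholder: AI, physics, etc. (every other frame, ~30hz)
    }
    render();                     // placeholder: draw every frame (~60hz)
    framecount++;
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);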
