I'm working on a multiple-projectile simulator for a college project, and I'm wondering how best to set up timing in JavaScript when rendering to an HTML5 canvas.
I'm using an Euler integrator for the physics, and accuracy is very important for this project. The rendering is very bare bones.
My question is how best to set up the timing for all of this.
Right now I have:
The physics and other logic running in a function that loops using setTimeout() with a fixed time step
The rendering in another function that loops using a requestAnimationFrame() call (flexible time step)
These two loops run sort of simultaneously (I know JavaScript doesn't really support threads without Web Workers) but I don't want the rendering (currently running at a much higher FPS than needed) to be unnecessarily 'stealing' CPU cycles from the physics simulation, if you see what I mean.
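Roughly, the current setup looks something like this (a bare sketch; stepPhysics and drawScene stand in for my real functions):
var fixedStep = 1 / 120;                   // fixed physics time step, in seconds

function physicsLoop() {
    stepPhysics(fixedStep);                // Euler integration with a fixed step
    setTimeout(physicsLoop, fixedStep * 1000);
}
physicsLoop();

function renderLoop() {
    drawScene();                           // redraws the canvas as often as the browser allows
    requestAnimationFrame(renderLoop);
}
requestAnimationFrame(renderLoop);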
Given that physics accuracy is most important here, how would you recommend setting up the timing system? (Maybe using Web Workers would be useful here, but I haven't seen them used in other engines.)
Thanks!
I'd suggest that you don't try to 'multithread' unless you're actually doing it, and even then, I wouldn't necessarily recommend it.
The best way to keep everything in sync is to have a single thread of execution. A single setTimeout loop of about 33ms seems to work OK for my games.
Also, in my experience at least, setTimeout offers a much nicer-looking experience than setInterval or requestAnimationFrame. With setInterval, JavaScript tries too hard to 'catch up' when frames are delivered late, which makes animation frames inconsistent. With requestAnimationFrame, frames are skipped to keep the game running smoothly, which actually makes things harder, because your users aren't entirely sure their view is up to date at any given second.
One way would be to set an interval for processing physics, and once per x frames, render everything.
var physicsTime = 1000 / 60;   // e.g. run physics every ~16ms
var renderFrequency = 2;       // e.g. render once every 2 physics updates
var frameCount = 0;
setInterval(function(){ updateStuff(); }, physicsTime);
then in updateStuff()
function updateStuff(){
    frameCount++;
    if (frameCount >= renderFrequency){
        frameCount -= renderFrequency;
        render();
    }
    physics();
}
For educational purposes I need to compare the performance of WebGL with OpenGL. I have two equivalent programs written in WebGL and OpenGL, now I need to take the frame rate of them and compare them.
In JavaScript I use requestAnimationFrame to animate, and I noticed that it keeps the frame rate pinned at 60 FPS; it only drops if I switch tab or window. On the other hand, if I just call the render function recursively, the window freezes for obvious reasons.
This is how I am measuring the FPS:
var stats = new Stats();
stats.domElement.style.position = 'absolute';
stats.domElement.style.left = '450px';
stats.domElement.style.top = '750px';
document.body.appendChild( stats.domElement );
setInterval( function () {
    stats.begin();
    stats.end();
}, 1000 / 60 );
var render = function() {
    requestAnimationFrame(render);
    renderer.render(scene, camera);
};
render();
Now the problem with always having the scene at 60 FPS is that I cannot actually compare it with the frame rate of OpenGL, since OpenGL redraws the scene only when it is somehow modified (for example if I rotate the object) and glutPostRedisplay() gets called.
So I am wondering whether there is a way in WebGL to redraw the scene only when it is necessary, for example when the object is rotated or when some attributes in the shaders are changed.
You can't compare framerates directly across GPUs in WebGL by pushing frames. Rather you need to figure out how much work you can get done within a single frame.
So, basically pick some target framerate and then keep doing more and more work until you go over your target. When you've hit your target that's how much work you can do. You can compare that to some other machine or GPU using the same technique.
Some people will suggest using glFinish to check timing. Unfortunately that doesn't actually work because it stalls the graphics pipeline and that stalling itself is not something that normally happens in a real app. It would be like timing how fast a car can go from point A to point B but instead of starting long before A and ending long after B you slam on the brakes before you get to B and measure the time when you get to B. That time includes all the time it took to slow down which is different on every GPU and different between WebGL and OpenGL and even different for each browser. You have no way of knowing how much of the time spent is time spent slowing down and how much of it was spent doing the thing you actually wanted to measure.
So instead, you need to go full speed the entire time. Just like a car you'd accelerate to top speed before you got to point A and keep going top speed until after you pass B. The same way they time cars on qualifying laps.
You don't normally stall a GPU by slamming on the brakes (glFinish), so adding the stopping time to your timing measurements is irrelevant and doesn't give you useful info. Using glFinish you'd be timing drawing + stopping. If one GPU draws in 1 second and stops in 2, and another GPU draws in 2 seconds and stops in 1, your timing will say 3 seconds for both GPUs. But if you ran them without stopping, one GPU would draw 3 things a second and the other GPU would only draw 1.5 things a second. One GPU is clearly faster, but using glFinish you'd never know that.
Instead you run full speed by drawing as much as possible and then measure how much you were able to get done and maintain full speed.
Here's one example:
http://webglsamples.org/lots-o-objects/lots-o-objects-draw-elements.html
It basically draws each frame. If the frame rate was 60fps it draws 10 more objects the next frame. If the frame rate was less than 60fps it draws less.
Because browser timing is not perfect, you might want to choose a slightly lower target, like 57fps, to find how fast it can go.
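A minimal sketch of that adaptive approach (the numbers and the drawObjects helper are illustrative, not taken from the linked sample):
var numObjects = 10;
var then = 0;

function frame(now) {
    var fps = 1000 / (now - then);                    // frame rate achieved over the last frame
    then = now;
    if (fps >= 57) {
        numObjects += 10;                             // kept up with the target, so add more work
    } else {
        numObjects = Math.max(10, numObjects - 10);   // fell behind, so back off
    }
    drawObjects(numObjects);                          // draw that many objects this frame
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);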
On top of that, WebGL and OpenGL really just talk to the GPU, and the GPU does the real work. The work done by the GPU will take the exact same amount of time regardless of whether WebGL or OpenGL asked the GPU to do it. The only difference is in the overhead of setting up the GPU. That means you really don't want to draw anything heavy. Ideally you'd draw almost nothing. Make your canvas 1x1 pixel, draw a single triangle, and check the timing (as in, how many single triangles can you draw, one triangle at a time, in WebGL vs OpenGL at 60fps).
It gets even worse though. A real app will switch shaders, switch buffers, switch textures, and update attributes and uniforms often. So, what are you timing? How many times you can call gl.drawBuffers at 60fps? How many times you can call gl.enable or gl.vertexAttribPointer or gl.uniform4fv at 60fps? Some combination? What's a reasonable combination? 10% calls to gl.vertexAttribPointer + 5% calls to gl.bindBuffer + 10% calls to gl.uniform? The timing of those calls is the only thing that differs between WebGL and OpenGL, since ultimately they're talking to the same GPU and that GPU will run at the same speed regardless.
You actually do not want to use framerate to compare these things because, as you just mentioned, you are artificially capped to 60 FPS due to VSYNC.
The number of frames presented will be capped by the swap buffer operation when VSYNC is employed and you want to factor that mess out of your performance measurement. What you should do is start a timer at the beginning of your frame, then at the end of the frame (just prior to your buffer swap) issue glFinish (...) and end the timer. Compare the number of milliseconds to draw (or whatever resolution your timer measures) instead of the number of frames drawn.
The correct solution is to use the ANGLE_timer_query extension when available.
Quoting from the specification:
OpenGL implementations have historically provided little to no useful timing information. Applications can get some idea of timing by reading timers on the CPU, but these timers are not synchronized with the graphics rendering pipeline. Reading a CPU timer does not guarantee the completion of a potentially large amount of graphics work accumulated before the timer is read, and will thus produce wildly inaccurate results. glFinish() can be used to determine when previous rendering commands have been completed, but will idle the graphics pipeline and adversely affect application performance.
This extension provides a query mechanism that can be used to determine the amount of time it takes to fully complete a set of GL commands, and without stalling the rendering pipeline. It uses the query object mechanisms first introduced in the occlusion query extension, which allow time intervals to be polled asynchronously by the application.
(emphasis mine)
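In WebGL, the same query mechanism is exposed (where supported) through the EXT_disjoint_timer_query extension. A rough sketch, assuming gl is your WebGL context and drawScene issues the commands you want to time:
var ext = gl.getExtension('EXT_disjoint_timer_query');
if (ext) {
    var query = ext.createQueryEXT();
    ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
    drawScene();                                      // the GL work being measured
    ext.endQueryEXT(ext.TIME_ELAPSED_EXT);

    // Poll asynchronously instead of stalling the pipeline:
    function checkQuery() {
        var available = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
        var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
        if (available && !disjoint) {
            var ns = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT);  // nanoseconds
            console.log('GPU time: ' + (ns / 1000000) + ' ms');
        } else if (!available) {
            requestAnimationFrame(checkQuery);        // not ready yet, try again next frame
        }
    }
    requestAnimationFrame(checkQuery);
}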
How can I control the rendering loop frame rate in KineticJS? The docs for Kinetic.Animation show a frame rate being passed to the render callback, and Kinetic.Tween seems to have no frame rate logic, but I don't see any way to force, say, 30fps when 60fps is possible.
Loads of context for the curious follows, but the question is that simple. If anyone reads on, other advice is welcome. If you already know the answer, don't waste your time reading on!
I'm developing a music app that combines some DOM-based GUI controls (current iteration using jQuery Mobile) and Canvas-based GUI controls (using KineticJS). The latter involve some animation. Because the animated elements are triggered by music playback, I'm using Kinetic.Tween to avoid the complexity of remembering how long a given note has been playing (which Kinetic.Animation would require doing).
This approach works great at 60fps in Chrome (on a fast machine) but is just slow enough on iOS 6.1 Safari (iPad 2) that manipulating controls while animations are happening gets a little janky. I'm not using WebGL (unless KineticJS or Chrome does this by default for canvas?), and that's not an option when I package for native UIWebView.
As I'm getting beyond prototype into wanting to make more committed tech decisions, I see the following options, in order of perceived goodness:
Figure out how to cap the frame rate. Because my animations heavily use alpha fades but do not involve motion, I believe I could get away with 20-30fps and look fine. Could also scale this up on faster devices.
Don't respond immediately to touch inputs, but add them to a queue which I poll at a constant interval and only use the freshest for things like touchmove. This has no impact on my non-interactive animated elements, but tackles the problem from the other direction, trying to reduce the load of user interaction. This would require making Kinetic controls static and manually tracking touch coordinates (not terrible effort if it actually helped).
Rewrite DOM-based GUI to canvas-based (KineticJS); rewrite WebAudio-based engine to HTML5 audio; leverage CocoonJS or Ejecta for GPU-acceleration. This means having to hand-code stuff like file choosers and nav menus and such (bad). Losing WebAudio is pretty serious as it eliminates features like DSP effects and very fine-grained, low-latency timing (which is working just fine on an iPad 2).
Rewrite the app to separate DOM based GUI and WebAudio from Canvas-based elements, leverage CocoonJS. I'm not sure if/how well this works out, but the fact that CocoonJS passes JavaScript code as strings between the 2 components makes me very skittish about how solid this idea is. It's probably doable, but best case I'm very tied to CocoonJS moving forwards. I don't like architecting this way, but maybe it's not as bad as it sounds?
Make animations less juicy. This is least good not because of its design impact but because, as it is, I'm only animating ~20 simple shapes at any time in my central view component, however they include transparency and span an area ~1000x300. Other components like sliders are similarly bare-bones. In other words, it's not very juicy right now.
Overcome severe allergy to Objective-C; forget about the browser, Android, and that other mobile OS. Have a fast app that performs natively and has shiny Apple-approved widgets. My biggest problem with this approach is not wanting to be stuck in Objective-C reality for years, skillset-wise. I just don't like it.
Buy an iPad 3 or later. Since I already am pretending Android doesn't exist (I don't have any devices to test), why not pretend no one still has iPad 2? I think this is passing the buck -- if I can get acceptable performance on iPad 2, I will feel confident about the app's performance as I add more features.
I may be overlooking options or otherwise naive about how to tackle this. Some would say what I'm trying to build is just silly. But it's working pretty well just not ready for prime time on the iPad 2.
Yes, you can control the Kinetic.Animation framerate
The Kinetic.Animation sends in a frame object which has a frame.time property.
That .time is a running timer that you can use to throttle your animation speed.
Here's an example that throttles the Kinetic.Animation: http://jsfiddle.net/m1erickson/Hn3cC/
var lastTime;
var frameDelay = 1000;

var loop = new Kinetic.Animation(function(frame) {
    var time = frame.time;
    if (!lastTime) { lastTime = time; }
    var elapsed = time - lastTime;
    if (elapsed >= frameDelay) {
        // frameDelay has expired, so animate stuff now
        // set lastTime for the next loop
        lastTime = time;
    }
}, layer);
loop.start();
Working from #markE's suggestions, I tried a few things and found a solution. It's ultimately not rocket science, but here's what I figured out:
First, I tried the hack of doubling Tween durations and targets and using a timer to stop them at 50%. This kinda sorta worked, but it was hard to get it to look good and it was pretty error prone, since it meant coding bogus targets like negative opacity or height.
Second, having read the source for Tween and looked at the docs again for Animation, I decided I could locally use Animation instances instead of Tween instances and let the closure scope hang onto the relevant note properties. I eventually got this working smoothly and finally realized a big 'duh!': throttling the frame rate of several independently animating things does not in any way throttle the overall frame rate.
Lastly, I decided to give my component a render() method that calls itself in a loop with requestAnimationFrame, exits immediately if called before my clamp time, and otherwise updates all objects in the Kinetic canvas and calls layer.drawScene(). Because there is now only one animation loop, this drops the frame rate to whatever I need, and the app is fast on the iPad 2 (it looks exactly the same to my eyes, too).
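Roughly, the loop looks like this (a simplified sketch; the numbers and updateAllShapes are placeholders for my real code):
var fps = 30;                          // whatever rate the device can comfortably sustain
var clampInterval = 1000 / fps;
var lastDraw = 0;

function render(now) {
    requestAnimationFrame(render);
    if (now - lastDraw < clampInterval) { return; }  // called before the clamp time, bail out
    lastDraw = now;
    updateAllShapes();                 // set opacity etc. on every Kinetic node in the component
    layer.drawScene();                 // one draw call for the whole layer
}
requestAnimationFrame(render);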
So Kinetic is still helping with its higher-level canvas API, and so far my other control widgets are still easy to code using Kinetic to handle user input and dragging; they now perform much better since the big beast component is no longer eating up the CPU.
The short answer to my original question is that no, you can't lock the overall frame rate for very complex animations, but as Mark said, you can for anything that fits in a single Animation instance.
Note that I could still have used Animation without giving it a layer or explicitly calling any draw() methods, but since I'd still have to write all the logic to determine each element's current visual state, there was no gain in doing this. What would be very useful would be if Tween could accept a parameter to not automatically render. That would simplify code like mine, as I could shorthand the animation on individual objects but still choose when to actually do the heavy lifting of rendering everything. Seeing how much this whole exercise gained in performance on the iPad 2, it might be worth adding this option to the framework.
I've seen a couple of questions asking about this, but they're all over three years old and usually end by saying there's not much of a way around it yet, so I'm wondering if anything's changed.
I'm currently working on a game that draws onto a canvas using an interval that fires 60 times a second. It works great on my iPhone and PC, which has a fairly decent graphics card, but I'm now trying it on a ThinkCentre with Intel i3 graphics, and I notice some huge screen tearing:
http://s21.postimg.org/h6c42hic7/tear.jpg - it's a little harder to notice as a still.
I was just wondering if there's any way to reduce that, or to easily enable vertical sync. If there isn't, is there something I could do in my Windows 8 app port of the game?
Are you using requestAnimationFrame (RAF)? RAF will v-sync but setTimeout/setInterval will not.
http://msdn.microsoft.com/library/windows/apps/hh920765
Also, since 30fps is adequate for your users to see smooth motion, how about splitting your 60fps into 2 alternating parts:
"calculate/update" during one frame (no drawing)
and then do all the drawing in the next frame.
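For example, a rough sketch of that split (calculateAndUpdate and drawEverything are placeholder names):
var frameCount = 0;

function loop() {
    requestAnimationFrame(loop);
    frameCount++;
    if (frameCount % 2 === 0) {
        calculateAndUpdate();   // even frames: run game logic, no drawing
    } else {
        drawEverything();       // odd frames: just draw the state computed last frame
    }
}
requestAnimationFrame(loop);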
And, get to know Chrome's Timeline tool. This great little tool lets you analyze your code to discover where your code is taking the most time. Then refactor that part of your code for high performance.
[ Addition: More useful details about requestAnimationFrame ]
Canvas does not paint directly to the display screen. Instead, canvas "renders" to a temporary offscreen buffer. “Rendering” means the process of executing canvas commands to draw on the offscreen buffer. This offscreen buffer will be quickly drawn to the actual display screen when the next screen refresh occurs.
Tearing occurs when the offscreen rendering process is only partially complete when the offscreen buffer is drawn on the actual display screen during refresh.
setInterval does not attempt to coordinate rendering with screen refresh. So, using setInterval to control animation frames will occasionally produce tearing.
requestAnimationFrame (RAF) attempts to fix tearing by generating frames only between screen refreshes (a process called vertical synching). The typical display refreshes about 60 times per second (that’s every 16 milliseconds).
With requestAnimationFrame (RAF):
If the current frame is not fully rendered before the next refresh,
RAF will delay the painting of the current frame until the next screen refresh.
This delay reduces tearing.
So for you, RAF will likely help your tearing problem, but it also introduces another problem.
You must decide how to handle your physics processing:
Keep it in a separate process—like setInterval.
Move it into requestAnimationFrame.
Move it into web-workers (the work is done on a background thread separate from the UI thread).
Keep physics in a separate setInterval.
This is a bit like riding 2 trains with 1 leg on each—very difficult! You must be sure that all aspects of the physics are always in a valid state because you never know when RAF will read the physics to do rendering. You will probably have to create a “buffer” of your physics variables so they always are in a valid state.
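A rough sketch of that kind of buffering (stepPhysics and renderScene are placeholders; the state object is illustrative):
var currentState = { x: 0, y: 0 };     // the snapshot the render loop reads
var workingState = { x: 0, y: 0 };     // the values the physics step mutates

setInterval(function () {
    stepPhysics(workingState);                                 // may pass through intermediate values
    currentState = JSON.parse(JSON.stringify(workingState));   // publish a complete, valid snapshot
}, 1000 / 30);

function draw() {
    requestAnimationFrame(draw);
    renderScene(currentState);         // RAF only ever sees a finished snapshot
}
requestAnimationFrame(draw);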
Move physics into RAF:
If you can both calculate physics and render within the 16ms between refreshes, this solution is ideal. If not, your frame may be delayed until the next refresh cycle. This results in 30fps, which is not terrible since the eye still perceives fluid motion at 30fps. The worst case is when the delay sometimes occurs and sometimes doesn't; then your animation may appear jerky. So the key here is to spread the calculations as evenly as possible across refresh cycles.
Move physics into web workers
JavaScript is single-threaded. Both the UI and calculations must run on this single thread. But you can use web workers, which run the physics on a separate thread. This frees up the UI thread to concentrate on rendering and painting. But you must coordinate the background physics with the foreground UI.
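A minimal sketch of the web-worker option (physics-worker.js is a hypothetical file; the message shape and helper names are up to you):
// main.js
var physicsWorker = new Worker('physics-worker.js');
var latestState = null;

physicsWorker.onmessage = function (e) {
    latestState = e.data;              // a complete snapshot posted by the worker
};

function draw() {
    requestAnimationFrame(draw);
    if (latestState) { renderScene(latestState); }   // renderScene is a placeholder
}
requestAnimationFrame(draw);

// physics-worker.js
// setInterval(function () {
//     stepPhysics();                               // advance the simulation
//     postMessage(currentStateSnapshot());         // send a snapshot back to the UI thread
// }, 1000 / 30);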
Good luck with your game :)
Is there a standard (accepted/easy/performant) way to determine how fast a client machine renders javascript?
When I'm running web apps (videos, etc) on my other tabs my JS animations slow to a crawl.
If I could detect slowness from my JS, I would use simpler animations to provide a better user experience.
Update:
Removing animations for everyone is not the answer. I am talking about the simplest of animations which will stutter depending on browser / computer. If I could detect the level of slowness, I would simply disable them.
This is the same as video games with dynamic graphics quality: you want to please people with old computers without penalizing those who have the extra processing power.
One tip is to disable those hidden animations. If they are on another tab that is not in focus, what's the use of keeping them animated?
Another is to keep animations to a minimum. I assume you are animating the DOM, and DOM operations are expensive, so keep those to a minimum as well.
One tip I got somewhere: if you are doing image animation or manipulation, consider using canvas instead so that you are not operating on the DOM.
Also, consider progressive enhancement. Keep your features simple and work your way up to complicated things. Use the simple features as a baseline every time you add something new. That way, you can easily determine what causes the problem and fix it accordingly.
The main problem you should first address is why it is slow, not when it is slow.
I know this question is old, but I've just stumbled across it. The simplest way is to execute a long loop and measure the start and end time. This should give you some idea of the machine's Javascript performance.
Please bear in mind, this may delay page loading, so you may want to store the result in a cookie, so it's not measured on every visit to the page.
Something like:
var starttime = new Date();
for (var i = 0; i < 1000000; i++) {}    // busy loop: a slower machine will take longer
var dt = new Date() - starttime;        // elapsed time in milliseconds
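To avoid re-running the loop on every page load, the result could be cached, for example in a cookie (a rough sketch; the cookie name is arbitrary):
document.cookie = 'jsSpeed=' + dt + '; max-age=' + (60 * 60 * 24 * 30);   // keep for ~30 days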
Hope this helps.
Currently, I am rendering WebGL content using requestAnimationFrame which runs at (ideally) 60 FPS. I'm also concurrently scheduling an "update" process, which handles AI, physics, and so on using setTimeout. I use the latter because I only really need to update objects roughly 30 times per second, and it's not really part of the draw sequence; it seemed like a good idea to save the remaining CPU for actual render passes, since most of my animations are fairly hardware intensive.
My question is one of best practices. setTimeout and setInterval are not particularly kind to battery life and CPU consumption, especially when the browser is not in focus. On the other hand, using requestAnimationFrame (or tying the updates directly into the existing render phase) will potentially enforce far more updates every second than are strictly necessary, and may stop updating altogether when the browser is not in focus or at other times the browser deems unnecessary for "animation".
What is the best course of action for updating, but not rendering content?
setTimeout and setInterval are not particularly kind to battery life and CPU consumption
Let's be honest: Neither is requestAnimationFrame. The difference is that RAF automatically turns off when you leave the tab. That behavior can be emulated with setTimeout if you use the Page Visibility API, though, so in reality the power consumption problems between the two are about on par if used intelligently.
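A rough sketch of that emulation with the Page Visibility API (updateLogic is a placeholder; older browsers may need vendor-prefixed names):
var updateTimer = null;

function startUpdates() {
    if (updateTimer === null) {
        updateTimer = setInterval(updateLogic, 1000 / 30);   // ~30 updates per second
    }
}

function stopUpdates() {
    clearInterval(updateTimer);
    updateTimer = null;
}

document.addEventListener('visibilitychange', function () {
    if (document.hidden) { stopUpdates(); } else { startUpdates(); }
});
startUpdates();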
Beyond that, though, setTimeout/setInterval is perfectly appropriate for use in your case. The only thing you may want to be aware of is that you'll be hard pressed to get it perfectly in sync with the render loop. You'll have cases where you may draw one too many times before your animation update hits, which can lead to minor stuttering. If you're rendering at 60hz and updating at 30hz it shouldn't be a big issue, but you'll want to be aware of it.
If staying perfectly in sync with the render loop is important to you, you could simply have an if(framecount % 2) { updateLogic(); } at the top of your RAF callback, which effectively limits your updates to 30hz (every other frame) and is always in sync with the draw.
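A sketch of that approach (framecount and the two functions are placeholders):
var framecount = 0;

function frame() {
    requestAnimationFrame(frame);
    framecount++;
    if (framecount % 2) {
        updateLogic();   // runs at ~30hz, always immediately before a draw
    }
    render();            // runs at ~60hz
}
requestAnimationFrame(frame);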