I'm developing a web game which uses a fixed physics timestep.
Currently I am doing all my physics and rendering in a requestAnimationFrame function.
This works perfectly, but requestAnimationFrame stops firing when the user tabs out (to save CPU, I suppose).
As a result, if you tab out for like 30 minutes and then come back, the game has to simulate 30 minutes of physics in order to get back to the right spot.
This is very bad: the tab freezes and the CPU cranks to 100% for the next 3 minutes while it runs all those physics calculations.
I think the only solution is to make the physics run while the tab is inactive, but I'm not sure how. I suppose I should only have the rendering in requestAnimationFrame, because it doesn't matter if that gets missed. But the physics need to be maintained.
Apparently setInterval is also heavily throttled when the user tabs out.
So, how should this be handled? I need a way to run the physics every 16.6667 milliseconds regardless of whether the user is tabbed in or not. Is this possible? I've heard about Web Workers, but is that the actual solution? Putting the entire game inside a Web Worker? What about when I want to change the DOM and things like that? Seems like a bit much.
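Roughly what I'm imagining, if it helps (the file name physics-worker.js, stepPhysics(), and drawScene() are just placeholders, and I'm assuming timers inside a dedicated worker aren't throttled as hard as main-thread timers in a hidden tab, which seems to be the case from what I've read):

// physics-worker.js (placeholder name): runs on a worker thread,
// where interval timers are generally throttled much less than on a hidden tab's main thread
var STEP_MS = 1000 / 60;
setInterval(function () {
    postMessage({ type: "tick", dt: STEP_MS });  // ask the page to advance one fixed step
}, STEP_MS);

// main.js
var physicsWorker = new Worker("physics-worker.js");
physicsWorker.onmessage = function (e) {
    if (e.data.type === "tick") {
        stepPhysics(e.data.dt);                  // fixed-timestep physics update (placeholder)
    }
};
function render() {
    drawScene();                                 // rendering only; fine if frames get skipped (placeholder)
    requestAnimationFrame(render);
}
requestAnimationFrame(render);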
From my tests, and from searching to find out more about the problem, my best guess is that CSS animations may be using a different physical clock from the one used to stream audio. If so, perhaps the answer is that it can't be done, but I'm asking in case I'm missing something.
It is for my online metronome here.
It is able to play notes reasonably accurately in response to an event listener for the CSS animationiteration event. The event listener is set up using e.g.
x.addEventListener("animationstart",playSoundBall2);
See here.
However, if I try to synchronize the bounce with the sample-precise timing of the AudioContext method, that's when I run into problems.
What I do is use the first CSS callback just to record the AudioContext time corresponding to a CSS elapsed time of 0. Then I play the notes using the likes of:
oscillator.start(desired_start_time);
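Stripped down, the whole thing looks roughly like this (audioCtx, beatSeconds, and playClickAt are illustrative names, not the page's actual code):

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var beatSeconds = 0.5;                // illustrative tempo: 120 bpm
x.addEventListener("animationstart", function () {
    // record the audio-clock time that corresponds to CSS elapsed time 0
    var timeZero = audioCtx.currentTime;
    // schedule the next few clicks sample-precisely on the audio clock
    for (var n = 1; n <= 4; n++) {
        playClickAt(timeZero + n * beatSeconds);
    }
});
function playClickAt(when) {
    var oscillator = audioCtx.createOscillator();
    oscillator.connect(audioCtx.destination);
    oscillator.start(when);           // the desired_start_time on the AudioContext clock
    oscillator.stop(when + 0.05);     // short click
}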
You can try it out with the option "Schedule notes in advance for sample-precise timing of the audio" on the page here.
You can check how much it drifts by switching on "Add debug info to extra info" on the same page.
On my development machine it works fine with Firefox. But on Edge and Chrome it drifts away from the bounce. And not in a steady way. Can be fine for several minutes and then the bounce starts to slow down relative to the audio stream.
It is not a problem with browser activity: if I move the browser around and try to interrupt the animation, the worst that happens is that it may drop notes (and if the browser isn't active it is liable to drop notes), but the ones it does play are exactly in time.
My best guess so far is that the browser is using the system time for the animation, while the AudioContext method is scheduling the note at a precise point in a continuous audio stream. From online searches for the problem, those may well be driven by different hardware clocks.
Firefox may for some reason be using the same hardware clock, maybe just on my development machine.
If this is correct, it rather looks as if there is no way to guarantee precise synchronization of audio played using an AudioContext with CSS animations.
If that is so, I would also think you probably can't guarantee synchronization with any JavaScript animation either, since it would depend on which clock the browser uses for the animation and how that relates to whatever clock is used for streaming audio.
But can this really be the case? What do animators do who need to synchronize sound with animations for long periods of time? Or do they only ever synchronize them for a few minutes at a time?
I wouldn't have noticed if it weren't that the metronome naturally is used for long periods at a time. It can get so bad that the click is several seconds out from the bounce after just two or three minutes.
At other times it is much better: while writing this I've had the metronome going for ten minutes in my Windows 10 app and it has drifted, but only by 20-30 ms relative to the bounce. So it is very irregular, and you can't hope to solve it by adding in some fixed speed-up or slow-down to get them in time with each other.
I am writing this just in case there is a way to do this in JavaScript that I'm missing. I'm also interested to know whether it makes any difference if I use other methods of animation. I can't see how one could use the AudioContext clock directly for animation, as you can only schedule notes in the audio stream; you can't schedule a callback at a particular exact time in the future according to the audio stream.
How can I control the rendering loop frame rate in KineticJS? The docs for Kinetic.Animation show a frame rate being passed to the render callback, and Kinetic.Tween seems to have no frame rate logic, but I don't see any way to force, say, 30fps when 60fps is possible.
Loads of context for the curious follows, but the question is that simple. If anyone reads on, other advice is welcome. If you already know the answer, don't waste your time reading on!
I'm developing a music app that combines some DOM-based GUI controls (current iteration using jQuery Mobile) and Canvas-based GUI controls (using KineticJS). The latter involve some animation. Because the animated elements are triggered by music playback, I'm using Kinetic.Tween to avoid the complexity of remembering how long a given note has been playing (which Kinetic.Animation would require doing).
This approach works great at 60fps in Chrome (on a fast machine) but is just slow enough on iOS 6.1 Safari (iPad 2) that manipulating controls while animations are happening gets a little janky. I'm not using WebGL (unless KineticJS or Chrome does this by default for canvas?), and that's not an option when I package for native UIWebView.
As I'm getting beyond prototype into wanting to make more committed tech decisions, I see the following options, in order of perceived goodness:
Figure out how to cap the frame rate. Because my animations heavily use alpha fades but do not involve motion, I believe I could get away with 20-30fps and look fine. Could also scale this up on faster devices.
Don't respond immediately to touch inputs, but add them to a queue which I poll at a constant interval and only use the freshest for things like touchmove. This has no impact on my non-interactive animated elements, but tackles the problem from the other direction, trying to reduce the load of user interaction. This would require making Kinetic controls static and manually tracking touch coordinates (not terrible effort if it actually helped).
Rewrite DOM-based GUI to canvas-based (KineticJS); rewrite WebAudio-based engine to HTML5 audio; leverage CocoonJS or Ejecta for GPU-acceleration. This means having to hand-code stuff like file choosers and nav menus and such (bad). Losing WebAudio is pretty serious as it eliminates features like DSP effects and very fine-grained, low-latency timing (which is working just fine on an iPad 2).
Rewrite the app to separate DOM based GUI and WebAudio from Canvas-based elements, leverage CocoonJS. I'm not sure if/how well this works out, but the fact that CocoonJS passes JavaScript code as strings between the 2 components makes me very skittish about how solid this idea is. It's probably doable, but best case I'm very tied to CocoonJS moving forwards. I don't like architecting this way, but maybe it's not as bad as it sounds?
Make animations less juicy. This is least good not because of its design impact but because, as it is, I'm only animating ~20 simple shapes at any time in my central view component; however, they include transparency and span an area of ~1000x300. Other components like sliders are similarly bare-bones. In other words, it's not very juicy right now.
Overcome severe allergy to Objective-C; forget about the browser, Android, and that other mobile OS. Have a fast app that performs natively and has shiny Apple-approved widgets. My biggest problem with this approach is not wanting to be stuck in Objective-C reality for years, skillset-wise. I just don't like it.
Buy an iPad 3 or later. Since I already am pretending Android doesn't exist (I don't have any devices to test), why not pretend no one still has iPad 2? I think this is passing the buck -- if I can get acceptable performance on iPad 2, I will feel confident about the app's performance as I add more features.
I may be overlooking options or otherwise naive about how to tackle this. Some would say what I'm trying to build is just silly. But it's working pretty well just not ready for prime time on the iPad 2.
Yes, you can control the Kinetic.Animation framerate
The Kinetic.Animation sends in a frame object which has a frame.time property.
That .time is a running timer that you can use to throttle your animation speed.
Here's an example that throttles the Kinetic.Animation: http://jsfiddle.net/m1erickson/Hn3cC/
var lastTime;
var frameDelay = 1000;

var loop = new Kinetic.Animation(function(frame) {
    var time = frame.time;
    if (!lastTime) { lastTime = time; }
    var elapsed = time - lastTime;
    if (elapsed >= frameDelay) {
        // frameDelay has expired, so animate stuff now
        // set lastTime for the next loop
        lastTime = time;
    }
}, layer);
loop.start();
Working from #markE's suggestions, I tried a few things and found a solution. It's ultimately not rocket science, but sharing what I figured out:
First, tried the hack of doubling Tween durations and targets, using a timer to stop them at 50%. This kinda sorta worked but was hard to get to look good and was pretty error prone in coding bogus targets like negative opacity or height or whatnot.
Second, having read the source to Tween and looked at docs again for Animation, decided I could locally use Animation instances instead of Tween instances, and allow the closure scope to hang onto the relevant note properties. Eventually got this working smoothly and finally realized a big Duh! which is that throttling the frame rate of several independently animating things does not in any way throttle the overall frame rate.
Lastly, decided to give my component a render() method that calls itself in a loop with requestAnimationFrame, exits immediately if called before my clamp time, and inside render() I update all objects in the Kinetic canvas and call layer.drawScene(). Because there is now only one animation, this drops frame rate to whatever I need and the app is fast on iPad 2 (looks exactly the same to my eyes too).
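The loop ended up looking roughly like this (layer is my Kinetic layer; updateAllObjects() and the 30fps clamp are just illustrative stand-ins for my actual per-note updates):

var frameInterval = 1000 / 30;    // clamp rendering to roughly 30fps
var lastDraw = 0;
function render(now) {
    requestAnimationFrame(render);
    if (now - lastDraw < frameInterval) {
        return;                   // called before the clamp time, so exit immediately
    }
    lastDraw = now;
    updateAllObjects();           // set position/opacity on the Kinetic shapes (placeholder)
    layer.drawScene();            // one draw for the whole layer
}
requestAnimationFrame(render);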
So Kinetic is still helping with its higher-level canvas API, and so far my other control widgets are still easy to code using Kinetic to handle user input and dragging; they now perform much better since the big beast component is not eating up the CPU.
The short answer to my original question is that no, you can't lock the overall frame rate for very complex animations, but as Mark said, you can for anything that fits in a single Animation instance.
Note that I could have still used Animation without giving it a layer or explicitly calling any draw() methods, but since I'd still have to write all the logic to determine each individual element's current visual state, there was no gain in doing this. What would be very useful would be if Tween could accept a parameter to not automatically render. This would simplify code like mine, as I could shorthand the animation on individual objects but still choose when to actually do the heavy lifting of rendering everything. Seeing how much this whole exercise gained in performance on the iPad 2, it might be worth adding this option to the framework.
I've been updating a Node.JS FFI to SDL to use SDL2. (https://github.com/Freezerburn/node-sdl/tree/sdl2) And so far, it's been going well and I can successfully render 1600+ colored textures without too much issue. However, I just started running into an issue that I cannot seem to figure out, and does not seem to have anything to do with the FFI, GC, speed of Javascript, etc.
The problem is that when I call SDL_RenderPresent with VSYNC enabled, occasionally, every few seconds, this call will take 20-30 or more milliseconds to complete. And it looks like this is happening 2-3 times in a row. This causes a very brief, but noticeable, visual hitch in whatever is moving on the screen. The rest of the time, this call will take the normal amount of time to display whatever was drawn to the screen at the correct time to be synced up with the screen, and everything looks very smooth.
You can see this in action if you clone the repository mentioned above. Build it with node-gyp, then just run test.js. (I could embed the test code into Stack Overflow, but I figured it would be easier to just have the full example on GitHub.) It requires SDL2, SDL2_ttf, and SDL2_image to be in /Library/Frameworks. (This is still in development, so there's nothing fancy put together for finding SDL2 automatically, having the required code in the repository, pulling it from somewhere, etc.)
EDIT: This should likely go under the gamedev StackExchange site. I don't know if it can be moved/linked or not.
Doing some more research online, I've discovered what the "problem" was. It was something I'd (somehow) never really encountered before, so I thought it was some obvious problem where I was not using SDL correctly.
Turns out, "jittery" graphics are a problem every game can face, and there are common ways to work around it. Basically, the problem is that a CPU cannot run every process/thread in the OS completely in parallel; sometimes a process has to be paused in order to run something else. When this happens during a frame update, it can cause that frame to take up to twice as long as normal to actually be pushed to the screen. This is where the jitter comes from. It became most obvious that this was the problem after reading a Unity question about a similar jitter, where a commenter pointed out that running something such as Activity Monitor on OS X causes the jitter to happen regularly, every couple of seconds, which is about the same interval at which Activity Monitor polls all the running processes for information. Killing Activity Monitor made the jitter much less regular.
So there is no real way to guarantee that your code will run every 16 milliseconds on the dot, or that it will be exactly another 16 milliseconds before it runs again. To get a perfectly smooth experience, you have to separate the timing of the code that handles events, movement, AI, etc. from the timing of when a new frame is rendered. This generally means running all your logic fewer times per second than you draw frames, predicting where every object will be in between actual updates, and drawing each object in that spot. See the deWiTTERS game loop article for more concrete details on this, on top of a fantastic overview of game loops in general.
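In JavaScript terms, that separation looks roughly like this (update(), render(), the 25-updates-per-second rate, and the catch-up cap are all illustrative, not taken from the bindings above):

var UPDATE_MS = 1000 / 25;        // run game logic 25 times per second (illustrative rate)
var MAX_CATCHUP = 5;              // cap catch-up steps so a long stall can't snowball
var nextUpdate = Date.now();

// call this once per rendered frame, e.g. right before presenting the frame
function onFrame() {
    var now = Date.now();
    var steps = 0;
    while (now >= nextUpdate && steps < MAX_CATCHUP) {
        update(UPDATE_MS);        // fixed-step logic: events, movement, AI, collisions (placeholder)
        nextUpdate += UPDATE_MS;
        steps++;
    }
    // how far we are between the previous update and the next one, clamped to 0..1
    var alpha = Math.min(1, 1 - (nextUpdate - now) / UPDATE_MS);
    render(alpha);                // draw each object at its predicted position (placeholder)
}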
Note that this prediction method of delivering a smooth game experience does not come without problems. The main one is that if you are displaying an object in a predicted location without actually doing the full collision detection on it, that object can very easily clip into other objects for a few frames. In the pong clone I am writing to test the SDL bindings, with predicted object drawing, if I hold right while up against a wall the paddle repeatedly clips into the wall before popping back out, because its location is predicted to be further than it is allowed to go. This is a separate problem that has to be dealt with in a different way; I am just letting the reader know about it.
I've seen a couple of questions asking about this, but they're all over three years old and usually end by saying there's not much of a way around it yet, so I'm wondering if anything's changed.
I'm currently working on a game that draws onto a canvas using an interval that fires 60 times a second. It works great on my iPhone and PC, which has a fairly decent graphics card, but I'm now trying it on a ThinkCentre with Intel i3 graphics, and I notice some huge screen tearing:
http://s21.postimg.org/h6c42hic7/tear.jpg - it's a little harder to notice as a still.
I was just wondering if there's any way to reduce that, or to easily enable vertical sync. If there isn't, is there something that I could do in my Windows 8 app port of the game?
Are you using requestAnimationFrame (RAF)? RAF will v-sync but setTimeout/setInterval will not.
http://msdn.microsoft.com/library/windows/apps/hh920765
Also, since 30fps is adequate for your users to see smooth motion, how about splitting your 60fps into 2 alternating parts (sketched after the list below):
"calculate/update" during one frame (no drawing)
and then do all the drawing in the next frame.
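A minimal sketch of that alternating split (updatePhysics() and drawEverything() are placeholders for your own update and draw code):

var doDrawThisFrame = false;
function loop() {
    if (doDrawThisFrame) {
        drawEverything();         // this frame: drawing only (placeholder)
    } else {
        updatePhysics();          // this frame: calculate/update only (placeholder)
    }
    doDrawThisFrame = !doDrawThisFrame;
    requestAnimationFrame(loop);
}
requestAnimationFrame(loop);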
And, get to know Chrome's Timeline tool. This great little tool lets you analyze your code to discover where your code is taking the most time. Then refactor that part of your code for high performance.
[ Addition: More useful details about requestAnimationFrame ]
Canvas does not paint directly to the display screen. Instead, canvas "renders" to a temporary offscreen buffer. “Rendering” means the process of executing canvas commands to draw on the offscreen buffer. This offscreen buffer will be quickly drawn to the actual display screen when the next screen refresh occurs.
Tearing occurs when the offscreen rendering process is only partially complete when the offscreen buffer is drawn on the actual display screen during refresh.
setInterval does not attempt to coordinate rendering with screen refresh. So, using setInterval to control animation frames will occasionally produce tearing.
requestAnimationFrame (RAF) attempts to fix tearing by generating frames only between screen refreshes (a process called vertical synching). The typical display refreshes about 60 times per second (that’s every 16 milliseconds).
With requestAnimationFrame (RAF): if the current frame is not fully rendered before the next refresh, RAF will delay the painting of the current frame until the next screen refresh. This delay reduces tearing.
So for you, RAF will likely help your tearing problem, but it also introduces another problem.
You must decide how to handle your physics processing:
Keep it in a separate process—like setInterval.
Move it into requestAnimationFrame.
Move it into web-workers (the work is done on a background thread separate from the UI thread).
Keep physics in a separate setInterval.
This is a bit like riding 2 trains with 1 leg on each—very difficult! You must be sure that all aspects of the physics are always in a valid state, because you never know when RAF will read the physics to do rendering. You will probably have to keep a “buffer” of your physics variables so that rendering always sees a consistent snapshot.
Move physics into RAF:
If you can both calculate the physics and render within the 16ms between refreshes, this solution is ideal. If not, your frame may be delayed until the next refresh cycle. This effectively drops you to 30fps, which is not terrible, since the eye still perceives fluid motion at 30fps. The worst case is that the delay sometimes occurs and sometimes doesn't—then your animation may appear jerky. So the key here is to spread the calculations as evenly as possible across refresh cycles.
Move physics into web workers
JavaScript is single-threaded: both the UI and your calculations must run on that one thread. But you can use web workers, which run the physics on a separate thread. This frees up the UI thread to concentrate on rendering and painting. The catch is that you must coordinate the background physics with the foreground UI.
Good luck with your game :)
What I would like to do is run a simple and fast JavaScript benchmark when a user first accesses my web application, to detect his computer's capacity to animate elements. I remember a teacher did this in college to adapt the frame rate of his game depending on the user's machine.
Is it something that could be done without sacrificing too many of the user's resources? Maybe a few seconds after the page is loaded? And if the results are not good, we just disable some or all animations for this user.
To answer the questions that you asked in your post:
1) It is possible to run a JavaScript benchmark when a person first loads your site that tests how well their device handles CSS/JavaScript animations, but it will cost some resources. You have to consider the people who will be accessing your site. If they're going to be accessing your site via a mobile device, the last thing you want to do is bring their device to a standstill and either freeze their browser or bump other apps out of memory. You don't want either a desktop or mobile user to think that the site is unresponsive.
2) One of the issues with waiting a few seconds after the page has loaded is that, depending on what you want to do with the site, people will start to try to scroll through your content. If they start to scroll and realize that the site is "stuttering", they may think that something is wrong with their device.
Another caveat is that if you want the smoothest animations possible, you may end up using requestAnimationFrame (see https://hacks.mozilla.org/2011/08/animating-with-javascript-from-setinterval-to-requestanimationframe/). If somebody loads your site and then clicks away from it to a different tab while your test is running, browsers have adopted the practice of reducing the framerate in inactive tabs in favor of those in the active tab. Testing based on this (incorrect) framerate capability will result in parts of your site being disabled for people who are perfectly capable of running your site with all of the animations intact, so that's something you may need to take into account.
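If you do go the requestAnimationFrame route for the test, a minimal sketch might look something like this (the one-second duration, the 30fps cutoff, and disableFancyAnimations() are all arbitrary placeholders), with a check so a hidden tab's throttled frame rate doesn't skew the result:

function measureFps(durationMs, callback) {
    var frames = 0;
    var start = performance.now();
    function tick(now) {
        if (document.hidden) {
            // don't count frames from a throttled, hidden tab; restart the measurement
            start = now;
            frames = 0;
        } else {
            frames++;
        }
        if (now - start < durationMs) {
            requestAnimationFrame(tick);
        } else {
            callback(frames / ((now - start) / 1000));
        }
    }
    requestAnimationFrame(tick);
}

// run a roughly one-second probe shortly after load, then dial animations down if needed
measureFps(1000, function (fps) {
    if (fps < 30) {
        disableFancyAnimations();  // placeholder for whatever fallback the site uses
    }
});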
In some tests that I ran a couple of months ago, the CSS3 version of an animation was almost always smoother than the JavaScript version of an animation. However, there are now jQuery libraries that bridge this gap by using CSS3 animations when possible, such as jQuery Transit (http://ricostacruz.com/jquery.transit/).