Is there a standard (accepted/easy/performant) way to determine how fast a client machine renders javascript?
When I'm running web apps (videos, etc.) in other tabs, my JS animations slow to a crawl.
If I could detect slowness from my JS, I would use simpler animations to provide a better user experience.
Update:
Removing animations for everyone is not the answer. I am talking about the simplest of animations which will stutter depending on browser / computer. If I could detect the level of slowness, I would simply disable them.
This is the same as video games with dynamic graphics quality: you want to please people with old computers without penalizing those who have the extra processing power.
One tip is to disable hidden animations: if they are on a tab that is not in focus, what's the use of keeping them animated?
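As a minimal sketch of that idea using the Page Visibility API (the animationsPaused flag is just an illustrative name, and very old engines may need the prefixed document.webkitHidden instead of document.hidden):

var animationsPaused = false;
document.addEventListener('visibilitychange', function () {
    // document.hidden is true while the tab is in the background
    animationsPaused = document.hidden;
});
// inside the animation loop: if (animationsPaused) skip the update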
Another is to keep animations to a minimum. I assume you are animating DOM elements, and DOM operations are expensive, so keep those to a minimum as well.
One more tip I picked up: if you are doing image animation or manipulation, consider using canvas instead so that you are not operating on the DOM.
Also, consider progressive enhancement. Keep your features simple and work your way up to complicated things. Use the simple features as a baseline every time you add something new. That way, you can easily determine what causes the problem and fix it accordingly.
The main problem you should first address is why it is slow, not when it is slow.
I know this question is old, but I've just stumbled across it. The simplest way is to execute a long loop and measure the start and end time. This should give you some idea of the machine's Javascript performance.
Please bear in mind, this may delay page loading, so you may want to store the result in a cookie, so it's not measured on every visit to the page.
Something like:
var starttime = new Date();            // mark the start time
for( var i=0; i<1000000; i++ ) ;       // crude busy loop as the benchmark
var dt = new Date() - starttime;       // elapsed milliseconds
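As a rough sketch of the caching idea (using localStorage here instead of a cookie for brevity; the key name and the 50 ms threshold are just examples):

function measureJsSpeed() {
    var start = new Date();
    for( var i=0; i<1000000; i++ ) ;   // same crude busy loop as above
    return new Date() - start;         // elapsed milliseconds
}

var stored = window.localStorage && localStorage.getItem('jsSpeedMs');
var jsSpeedMs = stored ? parseInt(stored, 10) : measureJsSpeed();
if (window.localStorage && !stored) {
    localStorage.setItem('jsSpeedMs', String(jsSpeedMs));
}
// e.g. if (jsSpeedMs > 50) { fall back to simpler animations }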
Hope this helps.
How can I control the rendering loop frame rate in KineticJS? The docs for Kinetic.Animation show a frame rate being passed to the render callback, and Kinetic.Tween seems to have no frame rate logic, but I don't see any way to force, say, 30fps when 60fps is possible.
Loads of context for the curious follows, but the question is that simple. If anyone reads on, other advice is welcome. If you already know the answer, don't waste your time reading on!
I'm developing a music app that combines some DOM-based GUI controls (current iteration using jQuery Mobile) and Canvas-based GUI controls (using KineticJS). The latter involve some animation. Because the animated elements are triggered by music playback, I'm using Kinetic.Tween to avoid the complexity of remembering how long a given note has been playing (which Kinetic.Animation would require doing).
This approach works great at 60fps in Chrome (on a fast machine) but is just slow enough on iOS 6.1 Safari (iPad 2) that manipulating controls while animations are happening gets a little janky. I'm not using WebGL (unless KineticJS or Chrome does this by default for canvas?), and that's not an option when I package for native UIWebView.
As I'm getting beyond prototype into wanting to make more committed tech decisions, I see the following options, in order of perceived goodness:
Figure out how to cap the frame rate. Because my animations heavily use alpha fades but do not involve motion, I believe I could get away with 20-30fps and look fine. Could also scale this up on faster devices.
Don't respond immediately to touch inputs, but add them to a queue which I poll at a constant interval and only use the freshest for things like touchmove. This has no impact on my non-interactive animated elements, but tackles the problem from the other direction, trying to reduce the load of user interaction. This would require making Kinetic controls static and manually tracking touch coordinates (not terrible effort if it actually helped).
Rewrite DOM-based GUI to canvas-based (KineticJS); rewrite WebAudio-based engine to HTML5 audio; leverage CocoonJS or Ejecta for GPU-acceleration. This means having to hand-code stuff like file choosers and nav menus and such (bad). Losing WebAudio is pretty serious as it eliminates features like DSP effects and very fine-grained, low-latency timing (which is working just fine on an iPad 2).
Rewrite the app to separate DOM based GUI and WebAudio from Canvas-based elements, leverage CocoonJS. I'm not sure if/how well this works out, but the fact that CocoonJS passes JavaScript code as strings between the 2 components makes me very skittish about how solid this idea is. It's probably doable, but best case I'm very tied to CocoonJS moving forwards. I don't like architecting this way, but maybe it's not as bad as it sounds?
Make animations less juicy. This is least good not because of its design impact but because, as it is, I'm only animating ~20 simple shapes at any time in my central view component, however they include transparency and span an area ~1000x300. Other components like sliders are similarly bare-bones. In other words, it's not very juicy right now.
Overcome severe allergy to Objective-C; forget about the browser, Android, and that other mobile OS. Have a fast app that performs natively and has shiny Apple-approved widgets. My biggest problem with this approach is not wanting to be stuck in Objective-C reality for years, skillset-wise. I just don't like it.
Buy an iPad 3 or later. Since I already am pretending Android doesn't exist (I don't have any devices to test), why not pretend no one still has iPad 2? I think this is passing the buck -- if I can get acceptable performance on iPad 2, I will feel confident about the app's performance as I add more features.
I may be overlooking options or otherwise naive about how to tackle this. Some would say what I'm trying to build is just silly. But it's working pretty well, just not ready for prime time on the iPad 2.
Yes, you can control the Kinetic.Animation framerate
The Kinetic.Animation sends in a frame object which has a frame.time property.
That .time is a running timer that you can use to throttle your animation speed.
Here's an example that throttles the Kinetic.Animation: http://jsfiddle.net/m1erickson/Hn3cC/
var lastTime;
var frameDelay = 1000;

var loop = new Kinetic.Animation(function(frame) {
    var time = frame.time;
    if (!lastTime) { lastTime = time; }
    var elapsed = time - lastTime;
    if (elapsed >= frameDelay) {
        // frameDelay has expired, so animate stuff now
        // set lastTime for the next loop
        lastTime = time;
    }
}, layer);

loop.start();
Working from #markE's suggestions, I tried a few things and found a solution. It's ultimately not rocket science, but sharing what I figured out:
First, I tried the hack of doubling Tween durations and targets, using a timer to stop them at 50%. This kinda sorta worked, but it was hard to make it look good and was pretty error-prone, since it meant coding bogus targets like negative opacity or height or whatnot.
Second, having read the source to Tween and looked at docs again for Animation, decided I could locally use Animation instances instead of Tween instances, and allow the closure scope to hang onto the relevant note properties. Eventually got this working smoothly and finally realized a big Duh! which is that throttling the frame rate of several independently animating things does not in any way throttle the overall frame rate.
Lastly, decided to give my component a render() method that calls itself in a loop with requestAnimationFrame, exits immediately if called before my clamp time, and inside render() I update all objects in the Kinetic canvas and call layer.drawScene(). Because there is now only one animation, this drops frame rate to whatever I need and the app is fast on iPad 2 (looks exactly the same to my eyes too).
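For reference, a minimal sketch of that single throttled loop (minFrameMs, updateNoteVisuals and layer are illustrative names, not the actual app code):

var minFrameMs = 1000 / 30;   // clamp to roughly 30fps
var lastRender = 0;

function render(now) {
    requestAnimationFrame(render);       // keep the loop going
    if (now - lastRender < minFrameMs) {
        return;                          // called before the clamp time, so skip
    }
    lastRender = now;
    updateNoteVisuals();                 // update all objects in the Kinetic canvas
    layer.drawScene();                   // one draw per throttled frame
}

requestAnimationFrame(render);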
So Kinetic is still helping with its higher-level canvas API, and so far my other control widgets are still easy to code using Kinetic to handle user input and dragging; they now perform much better since the big beast component is not eating up the CPU.
The short answer to my original question is that no, you can't lock the overall frame rate for very complex animations, but as Mark said, you can for anything that fits in a single Animation instance.
Note that I could have still used Animation without giving it a layer or explicitly calling any draw() methods, but since I'd still have to write all the logic to determine individual element's current visual state, there was no gain to doing this. What would be very useful would be if Tween could accept a parameter to not automatically render. This would simplify code like mine, as I could shorthand the animation on individual objects but still choose when to actually do the heavy lifting of rendering everything. Seeing how much this whole exercise gained in performance on the iPad 2, might be worth adding this option to the framework.
Preface
I just started programming with JavaScript and I am currently working on this hobby web-site project of mine. The site is supposed to display pages filled with product images that can be "panned" to either the left or right. Each "page" contains about 24 medium-sized pictures, so one page almost completely fills an entire screen. When the user wants to look at the next page, he needs to click'n'drag to the left (for example) to let a new page (dynamically loaded through an AJAX script) slide into view.
The Issue
This requires my JavaScript to "slide" two of these pages synchronously by the width of the screen, and it results in a really low framerate. Firefox and Opera lag a bit; Chrome has it especially bad: one frame of animation takes approx. 100 milliseconds, which makes the animation look very "laggy".
I do not use jQuery, nor do I want to use it or any other library to "do the work for me". At least not until I know for sure that what I am trying to do can not be done with a couple of lines of self-written code.
So far I have figured out that the specific way I manipulate the DOM is causing the performance-drop. The routine looks like this:
function slide() {
    this.t = new Date().getTime() - this.msBase;
    if( this.t > this.msDura ) {
        this.callB.call(this.ref, this.nFrames);
        return false;
    }
    // calculating the displacement of both elements
    this.pxProg = this.tRatio * this.t;
    this.eA.style.left = ( this.pxBaseA + this.pxProg ) + 'px';
    this.eB.style.left = ( this.pxBaseB + this.pxProg ) + 'px';
    if ( bRequestAnimationStatus )
        requestAnimationFrame( slide.bind(this) );
    else
        window.setTimeout( slide.bind(this), 16 );
    this.nFrames++;
}

// starting an animation
slide.call({
    eA: theMiddlePage,
    eB: neighboorPage,
    callB: theCallback,
    msBase: new Date().getTime(),
    msDura: 400,
    tRatio: ((0 - pxScreenWidth) / 400),
    nFrames: 0,
    ref: myObject,
    pxBaseA: theMiddlePage.offsetLeft,
    pxBaseB: neighboorPage.offsetLeft
});
Question
I noticed that when I let the AJAX script load fewer images into each page, the animation becomes much faster. The separate images seem to create more overhead than I expected.
Is there another way to do this?
THE JAVASCRIPT SOLUTION
OK, there are two possible things for you to try to speed this up.
First of all, when you modify the style of an element, you force the browser to rerender the page. This is called a repaint. Certain changes also force a recalculation of the page's geometry; this is called a reflow, and a reflow always triggers a repaint immediately after it. I think the reason you're experiencing worse performance with more elements is that each update to each one triggers at least a repaint. What is normally recommended when modifying multiple styles on a single element is to either do them all at once by adding or removing a class, or to hide the element, do your manipulations, and then show it, which means only two repaints (and possibly reflows).
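For illustration, the "hide, manipulate, show" variant looks something like this (the element and the property values are made up):

var el = document.getElementById('page');
el.style.display = 'none';   // take it out of layout
el.style.left = '120px';     // these intermediate writes no longer each repaint
el.style.top = '40px';
el.style.width = '300px';
el.style.display = '';       // one reflow/repaint when it is shown again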
It appears that in this particular case, you're probably already doing just fine, as you're only manipulating each item once per iteration.
Second of all, requestAnimationFrame() is good for doing animations on a canvas element, but seems to have somewhat dodgy performance on DOM elements. Open your page in Chrome's profiler to see exactly where it hangs up. Try JUST using setTimeout() to see if that ends up being the case. Again, the profiler will tell you where the hang-ups are. requestAnimationFrame() should be better, but validate this. I've had it backfire on me, before.
THE REAL SOLUTION
Don't do this in JavaScript if you can at all avoid it. Use CSS transforms and transitions to do the animation, with a JavaScript function registered as the transitionend event handler for each animated element. Letting the browser do this stuff natively is almost always faster than any JS code anyone can write.
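A rough sketch of that approach, driving a CSS transition from JavaScript (pageEl and pxScreenWidth are placeholders matching the question's variables; older browsers need the -webkit- prefixed equivalents):

pageEl.style.transition = 'transform 400ms ease-out';
pageEl.addEventListener('transitionend', function onDone() {
    pageEl.removeEventListener('transitionend', onDone);
    // swap in the next page / fire the callback here, as slide() did
});
// changing the transform once lets the browser animate it natively
pageEl.style.transform = 'translateX(' + (-pxScreenWidth) + 'px)';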
The only catch is that CSS3 animations are only supported by the newer browsers. For your own edification, do it this way. For real, practical applications, delegate to a library. A good library will do it natively if possible and fall back to the best way of doing it in JS for the older browsers.
Good link to read regarding this stuff: http://www.html5rocks.com/en/tutorials/speed/html5/
I'm working on a multiple-projectile simulator for a college project, and I'm wondering how to best set up timing in JavaScript when rendering to an HTML5 canvas.
I'm using an Euler integrator setup for the physics, and accuracy is very important for this project. The rendering is very bare-bones.
My question is how to best setup the timing for all this.
Right now I have:
The physics and other logic running in a function that loops using setTimeout() with a fixed time step
The rendering in another function that loops using a requestAnimationFrame() call (flexible time step)
These two loops run sort of simultaneously (I know JavaScript doesn't really support threads without Web Workers) but I don't want the rendering (currently running at a much higher FPS than needed) to be unnecessarily 'stealing' CPU cycles from the physics simulation, if you see what I mean.
Given that physics accuracy is most important here, how would you recommend setting up the timing system? (Maybe using Web Workers would be useful here, but I haven't seen this used in other engines.)
Thanks!
I'd suggest that you don't try to 'multithread' unless you're actually doing it, and even then, I wouldn't necessarily recommend it.
The best way to keep everything in synch is to have a single thread of execution. A single setTimeout loop of about 33ms seems to work ok for my games.
Also, in my experience at least, setTimeout offers a much more aesthetic experience than setInterval or requestAnimationFrame. With setInterval, JavaScript tries too hard to 'catch up' when frames are delivered late, which makes animation frames inconsistent. With requestAnimationFrame, frames are skipped to ensure a smooth-running game, which actually makes things harder, because your users aren't entirely sure their view is up to date at any given second.
One way would be to set an interval for processing physics, and once per x frames, render everything.
var physicsTime = 10;      // ms between physics steps (example value)
var renderFrequency = 3;   // render once every 3 physics steps (example value)
var frameCount = 0;

setInterval(function(){ updateStuff(); }, physicsTime);

then in updateStuff()

function updateStuff(){
    frameCount++;
    if (frameCount >= renderFrequency){
        frameCount -= renderFrequency;
        render();
    }
    physics();
}
I'm working on my new portfolio and I want to use complex JavaScript (for animating, moving, and applying effects to DOM elements), and I'm going to do as much optimization as possible to maximize performance. BUT I can't prepare for every case my site will face. So I started looking for a script with which I can check the browser's performance (within a few seconds at most) and, based on the performance test results, set the number of effects displayed and calculated on the page.
So is there any way to check browser performance and set the optimal number of applied effects on a page?
If possible, use CSS transforms/transitions instead of pure-js effects, as the former are usually hardware accelerated and thus orders of magnitude faster.
Even if you don't use CSS transforms, you can detect support for them using e.g. Modernizr, and if they are supported, you can assume that the browser is very modern and has pretty good performance in general. Also take a look at window.requestAnimationFrame; it automatically throttles the framerate.
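A hand-rolled capability check might look roughly like this (Modernizr does this more thoroughly; the checks below only cover the unprefixed and WebKit cases):

var probe = document.createElement('div');
var hasTransitions = ('transition' in probe.style) || ('webkitTransition' in probe.style);
var hasRAF = typeof window.requestAnimationFrame === 'function' ||
             typeof window.webkitRequestAnimationFrame === 'function';

if (hasTransitions && hasRAF) {
    // reasonably modern browser: enable the heavier effects
} else {
    // fall back to a reduced set of effects
}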
My website has a jQuery script (from Shadow animation jQuery plugin) which constantly changes the colour of box shadow of various <div>s on the home page.
The animation is not essential, but it does take up a lot of CPU time on slower machines.
Is it possible to find out if the script will run 'too slowly'? I can then disable it before it impacts performance.
Is this even a good idea? If not, is there an easy way to break up the jQuery animate?
This may indirectly solve your problem. Pick a few algorithms and performance tests from this site http://dromaeo.com/ that seem similar to your jQuery plugin. Don't run comprehensive tests as they do on the site. Instead, pick fairly small and fast algorithms, and run them for an unnoticeable period of time.
Use a tiny predefined time span to limit how long these tests are allowed to run. Let's say that span is 200 ms: if on a fast machine with browser A you can get 100 iterations, while on some random user's machine it's only able to complete 5 iterations, then you may want to consider disabling the animation on that user's machine. Tweak and tweak till you find the optimal numbers.
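As a sketch of that idea (the 200 ms budget and the work inside the loop are placeholders for whatever resembles your plugin's workload):

function iterationsWithin(budgetMs) {
    var count = 0;
    var end = Date.now() + budgetMs;
    while (Date.now() < end) {
        // a small, representative chunk of work goes here
        for (var i = 0, s = 0; i < 10000; i++) { s += Math.sqrt(i); }
        count++;
    }
    return count;
}

var score = iterationsWithin(200);
// e.g. if (score < someThreshold) { disable the shadow animation }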
As a bonus, send all test results back to your server so you have a better idea of where your users lie in the speed spectrum. If a big majority of users are using slower computers and older browsers, then it just may make sense to remove the thing altogether.
You could probably do it by timing a few times round a loop which did some intensive processing on page load, but that's going to slow the page and add even further to CPU load, so it doesn't seem like a great solution.
A compromise I've used in the past, though, was to make the decision based on browser version, for example, Internet Explorer 6 users get simpler content whereas newer browsers with better JavaScript performance get the animation. That seemed to work pretty well at a practical level. In practice, browser choice is a big factor in JavaScript performance and you might get a 90% fit with what you want very simply just by taking that into account.
You could do something like $(window).width() to get the browser width. Using this you could make the assumption that anything < 1024px wide is likely to be either a netbook, smartphone or old computer.
This wouldn't be nearly as accurate as timing a loop, but it's much more efficient.
Obviously this rule is a generalisation and there will be slow computers with screens wider than 1024px. But in general, a 1024px+ computer would typically be able to handle a fair bit of JavaScript (until the owner puts on loads of software, virus scans and browser toolbars!).
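In code, that heuristic is a one-liner (jQuery is assumed to be loaded, since the question already uses it; the 1024 cutoff is the generalisation above):

var enableFancyAnimation = $(window).width() >= 1024;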
Hope this is useful!