How to make a GIF rotate while the tree is loading in JavaScript

I have a tree that gets populated through a web service - that part is super fast; the part that's a bit slower is populating the tree. I have a rotating GIF image that spins while the service is loading. Since I use the ajaxStart and ajaxStop triggers, the GIF stops rotating after the AJAX request has completed, which is correct. However, because populating the tree takes a split second, the GIF freezes for that split second, which looks unprofessional.
How do I make the gif rotate until the tree is finished loading?

Browsers give a low priority to image refreshing, so while your code is manipulating/inserting into the DOM, the browser is busy with that and doesn't have time to repaint the image.
There's not a whole lot you can do besides optimizing your code so that the processing you do with the AJAX data is less intensive. For example, if you're getting a list of 1000 items, insert them into the page in batches of 50, with a small delay between each batch, so the browser has time to repaint.
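A rough sketch of that batched insertion, assuming a hypothetical renderNode function that turns one data item into a DOM node (all names here are placeholders):

function insertInBatches(items, container, renderNode, batchSize) {
    var index = 0;
    (function nextBatch() {
        var end = Math.min(index + batchSize, items.length);
        for (; index < end; index++) {
            container.appendChild(renderNode(items[index]));
        }
        if (index < items.length) {
            // yield to the browser so it can repaint (and keep the GIF spinning)
            setTimeout(nextBatch, 10);
        }
    }());
}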
YMMV, maybe it looks great as is in Chrome, but freezes for 5 seconds in IE.

Browsers won't typically update images whilst JavaScript code is executing. If you need the spinner to continue animating during DOM population, your population function will have to give up control back to the browser several times a second to let it update the image, typically by setting a timeout (with no delay, or a very short delay) that calls back into the population process, and then returning.
Unfortunately this will usually make your population function much more complicated, as you have to keep track of how far you've got in the population process in variables instead of relying on loops and conditional structures to remember where you are. Also, it will be slightly slower to run, depending on how you're populating the page structures, and if there are click or other events that your application might get delivered half-way through population you can end up with nasty race conditions.
IMO it would probably be better to stop the spinner and then update the DOM. You'll still get the pause, but without the spinner stuttering to a halt it won't be as noticeable. To give the browser a chance to update the spinner after ajaxStop has changed its src, use a zero-delay-timeout-continuation in your AJAX callback function so that on completion the browser gets a chance to display the altered spinner before going into the lengthy population code.
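A minimal sketch of that continuation, assuming a hypothetical populateTree routine and URL:

$.ajax({
    url: '/tree-data', // placeholder
    success: function (data) {
        // Defer the heavy population work by one turn of the event loop so the
        // browser can repaint the (now stopped) spinner before the DOM work starts.
        setTimeout(function () {
            populateTree(data); // your own synchronous population code
        }, 0);
    }
});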
Making this population step faster is definitely worthwhile, if a slightly different topic. (Appending lots of DOM elements one after the other is inherently slow as each operation has to spend more time trudging through list operations. Appending lots of DOM elements all at once via a DocumentFragment is fast, but getting all those DOM elements into the fragment in the first place might not be. Parsing the entire innerHTML at once is generally fast, but generating HTML without injection security holes is an annoyance; serialising and re-parsing via innerHTML+= is slower and totally awful. IE/HTML5 insertAdjacentHTML is fast, but needs fallback implementation for many browsers: ideally fast Range manipulation, falling back to slow node-by-node DOM calls for browsers with no Range. Don't expect jQuery's append to do this for you; it is as slow as node-by-node DOM operations because that's exactly what it's doing.)
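For the DocumentFragment route, a sketch along these lines (items, buildNode and treeContainer are placeholders):

var fragment = document.createDocumentFragment();
for (var i = 0; i < items.length; i++) {
    fragment.appendChild(buildNode(items[i]));
}
treeContainer.appendChild(fragment); // a single insertion into the live document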

Since manipulating the DOM on the fly is really expensive for a lot of browsers (especially older ones), you might want to optimize what you are doing there as much as you can.
Also, another good idea would be to make sure you are running jQuery 1.4, which is a lot faster at such operations.
You can see a useful benchmark (1.3 vs 1.4) done by the jQuery team that illustrates this here:
http://jquery14.com/day-01/jquery-14

Related

Is DOM rendering GUARANTEED to block during a single (synchronous) function's execution?

DOM blocking is something many people not familiar with JavaScript's strictly single-threaded synchronous execution model find out about the hard way, and it's usually just something we want to work around somehow (using timeouts, web-workers, etc). All well and good.
However, I would like to know if blocking of the actual user-visible rendering is something you can actually rely on. I'm 90% sure it is de facto the case in most browsers but I am hoping this isn't just a happily consistent accident. I can't seem to find any definitive statements from DOM specifications or even vendor documentation like MDN.
What worries me slightly is that while changes to the DOM are indeed not visible looking at the page, the internal DOM geometry (including CSS transforms and filters) does actually update during synchronous execution. For example:
console.log(element.getBoundingClientRect().width);
element.classList.add("scale-and-rotate");
console.log(element.getBoundingClientRect().width);
element.classList.remove("scale-and-rotate");
... will indeed report two different width values, though the page does not appear to flash. Synchronously waiting after the class is added (using a while loop) doesn't make the temporary changes visible either. Doing a Timeline trace in Chrome reveals that internally paint and re-paint is taking place just the same, which makes sense...
My concern is that, lacking a specific reason not to, some browsers (say, those dealing with underpowered mobile CPUs) may choose to actually reflect those internal calculations in the user-visible layout during that function's execution, which would result in an ugly "flash" during such temporary operations. So, more concretely, what I'm asking is: do they have a specific reason not to?
(If you are wondering why I care about this at all, I sometimes need to measure calculated dimensions using getBoundingClientRect for elements in a certain state to plan out spacing or animations or other such things, without actually putting them in that state or animating them first...)
According to various sources, getting the position or size of a DOM element will trigger a reflow of the output if necessary, so that the returned values are correct. As a matter of fact, reading the offsetHeight of an element has become a way to force a reflow, as reported by Alexander Skutin and Daniel Norton.
Paul Irish gives a list of several actions that cause a reflow. Among them are these element box metrics methods and properties:
elem.offsetLeft, elem.offsetTop, elem.offsetWidth, elem.offsetHeight, elem.offsetParent, elem.clientLeft, elem.clientTop, elem.clientWidth, elem.clientHeight, elem.getClientRects(), elem.getBoundingClientRect()
Stoyan Stefanov describes strategies used by browsers to optimize reflows (e.g. queueing DOM changes and performing them in batches), and adds the following remark:
But sometimes the script may prevent the browser from optimizing the reflows, and cause it to flush the queue and perform all batched changes. This happens when you request style information, such as:
offsetTop, offsetLeft, offsetWidth, offsetHeight
scrollTop/Left/Width/Height
clientTop/Left/Width/Height
getComputedStyle(), or currentStyle in IE
All of these above are essentially requesting style information about a node, and any time you do it, the browser has to give you the most up-to-date value. In order to do so, it needs to apply all scheduled changes, flush the queue, bite the bullet and do the reflow.
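To illustrate the flush, consider something like the following (the element id is hypothetical); interleaving reads and writes forces a reflow on every pass, while grouping them avoids it:

var box = document.getElementById('box');

// Forces a reflow on each iteration: every offsetWidth read flushes the
// style change queued on the previous line.
for (var i = 0; i < 10; i++) {
    box.style.width = (box.offsetWidth + 1) + 'px';
}

// Avoids the repeated flushes: read once, then write once.
var newWidth = box.offsetWidth + 10;
box.style.width = newWidth + 'px';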
There is nothing in Javascript related to concurrency that is anything but de facto. JS simply does not define a concurrency model. Everything is happy accident or years of consensus.
That said, if your function does not make any calls to weird things like XMLHttpRequest or "alert" or something like that, you can basically treat it as single-threaded with no interrupts.

Animating multiple DIV-Elements with JS and the DOM results in a low Framerate

Preface
I just started programming with JavaScript and I am currently working on a hobby web-site project of mine. The site is supposed to display pages filled with product images that can be "panned" to the left or right. Each "page" contains about 24 medium-sized pictures, so one page almost completely fills the screen. When the user wants to look at the next page, he has to click and drag to the left (for example) to let a new page (dynamically loaded through an AJAX script) slide into the view.
The Issue
This requires my JavaScript to "slide" two of the mentioned pages synchronously by the width of the screen, and that results in a really low framerate. Firefox and Opera lag a bit; Chrome has it especially bad: one frame of animation takes approx. 100 milliseconds, which makes the animation look very "laggy".
I do not use jQuery, nor do I want to use it or any other library to "do the work for me". At least not until I know for sure that what I am trying to do can not be done with a couple of lines of self-written code.
So far I have figured out that the specific way I manipulate the DOM is causing the performance-drop. The routine looks like this:
function slide() {
    this.t = new Date().getTime() - this.msBase;
    if ( this.t > this.msDura ) {
        this.callB.call( this.ref, this.nFrames );
        return false;
    }

    // calculating the displacement of both elements
    this.pxProg = this.tRatio * this.t;
    this.eA.style.left = ( this.pxBaseA + this.pxProg ) + 'px';
    this.eB.style.left = ( this.pxBaseB + this.pxProg ) + 'px';

    if ( bRequestAnimationStatus )
        requestAnimationFrame( slide.bind(this) );
    else
        window.setTimeout( slide.bind(this), 16 );

    this.nFrames++;
}

// starting an animation
slide.call({
    eA: theMiddlePage,
    eB: neighboorPage,
    callB: theCallback,
    msBase: new Date().getTime(),
    msDura: 400,
    tRatio: ( (0 - pxScreenWidth) / 400 ),
    nFrames: 0,
    ref: myObject,
    pxBaseA: theMiddlePage.offsetLeft,
    pxBaseB: neighboorPage.offsetLeft
});
Question
I noticed that when I let the AJAX script load fewer images into each page, the animation becomes much faster. The separate images seem to create more overhead than I expected.
Is there another way to do this?
THE JAVASCRIPT SOLUTION
OK, there are two possible things for you to try to speed this up.
First of all, when you modify the style of an element, you force the browser to rerender the page. This is called a repaint. Certain changes also force a recalculation of the page's geometry; this is called a reflow, and a reflow always triggers a repaint immediately after it. I think the reason you're experiencing worse performance with more elements is that each update to each one triggers at least a repaint. What is normally recommended when modifying multiple styles on a single element is either to do them all at once by adding or removing a class, or to hide the element, do your manipulations, and then show it again, which means only two repaints (and possibly reflows).
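In code, the two usual variants look roughly like this (item and the class name are hypothetical):

// Option 1: swap a class instead of setting several inline styles.
item.className = 'active';        // one repaint instead of several

// Option 2: hide, manipulate, then show again.
item.style.display = 'none';      // repaint #1
item.style.left = '100px';
item.style.top = '50px';
item.style.backgroundColor = 'red';
item.style.display = '';          // repaint #2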
It appears that in this particular case, you're probably already doing just fine, as you're only manipulating each item once per iteration.
Second of all, requestAnimationFrame() is good for doing animations on a canvas element, but seems to have somewhat dodgy performance on DOM elements. Open your page in Chrome's profiler to see exactly where it hangs up. Try JUST using setTimeout() to see if that ends up being the case. Again, the profiler will tell you where the hang-ups are. requestAnimationFrame() should be better, but validate this. I've had it backfire on me, before.
THE REAL SOLUTION
Don't do this in JavaScript, if you can at all avoid it. Use CSS transitions and translation to do the animation with a JavaScript function registered as the onTransitionEnd event handler for each animated element. Letting the browser do this stuff natively is almost always faster than any JS code anyone can write.
The only catch is that CSS3 animations are only supported by the newer browsers. For your own edification, do it this way. For real, practical applications, delegate to a library. A good library will do it natively, if possible, and fall back to the best way of doing it in JS for the older browsers.
Good link to read regarding this stuff: http://www.html5rocks.com/en/tutorials/speed/html5/
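A sketch of the CSS-transition variant (the class name and helper are hypothetical; older WebKit builds fire webkitTransitionEnd rather than transitionend):

// Assumes a stylesheet rule along the lines of:
//   .page { transition: left 0.4s ease; }
// so the browser animates changes to 'left' natively.
function slideTo(pageElement, targetLeft, onDone) {
    pageElement.addEventListener('transitionend', function handler() {
        pageElement.removeEventListener('transitionend', handler);
        if (onDone) onDone();
    });
    pageElement.style.left = targetLeft + 'px'; // the browser does the tweening
}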

Async loading of Typekit :: is it worth it, or better not to use it at all?

Trying to get page-load time down.
I followed the third example outlined here to asynchronously load the TypeKit javascript.
To make it work you have to add a .wf-loading #some-element { visibility: hidden; } rule for each element that uses the font; after either 1) the font loads or 2) a set time passes (1 sec), the elements become visible.
The thing is, the CSS I'm working with has the font assigned to about 200 elements, so that's 200 of these .wf-loading rules (note: I did not write this CSS).
I feel that matching and hiding that many elements in the DOM would slow the load time down more than just letting the font load regularly. If this is the case, I will just axe Typekit altogether and go with a regular font.
Are there any tools I can use to run performance tests on this kind of stuff? Or has anyone tested these things out?
You're not actually modifying more than a single DOM element (the root html element) with this approach. This means that modern browsers will rely on their super-fast CSS engines, so the number of elements involved will have no noticeable effect on page load.
As far as page load and flicker, network latency is usually an order of magnitude worse than DOM manipulation. There will always be some flicker on the very first (unprimed) page load while the browser waits for the font to download. Just make sure your font is being cached for reuse, and try to keep its file size as small as possible.
I went down this path a few years ago with Cufon. In the end, I chose the simplest path with acceptable performance and stopped there. It's easy to get caught up in optimizing page loads, but there are probably more promising areas for improvement: features, bugs, refactoring, etc.
The problem is, as they mention in the blog, the rare occasions (but it definitely does happen - certainly more than once for me) when the Typekit CDN fails completely and users just see a blank page. This is when you'll wish you'd used async loading.

Is there a way in Javascript to time a browser reflow?

I need to be able to benchmark a particular build of a webkit-based browser and am measuring the length of time it takes to do certain stuff like DOM manipulation, memory limits etc.
I have a test below which records the length of time it takes to simultaneously load in 10 fairly heavy PNG graphics. In code, I need to be able to time how long it takes for the load to finish. I have tried setting the onLoad function on the dynamic Image object to produce a time in ms. However, as shown in the screen capture, it gives an inaccurate reading: the reported value is tiny because it only records the data-transfer part of the load, and there is then a considerable (3000+ ms) delay, circled in blue in the capture, before the images are actually viewable - this is the browser reflow cycle.
Is there some event in webkit I can use to record when the browser has finished a reflow so that I can benchmark this? I have to be able to record the time in milliseconds in code because the build of webkit I am testing has no developer tools. I am able to observe the difference in Chrome ok but the performance between the two builds differs drastically and I need to be able to quantify it accurately for comparison.
If you are using jQuery, you could try recording the time between document ready and window load, that would give you an approximation.
(function(){
    var start, end;
    $(document).ready(function(){
        start = new Date();
    });
    $(window).load(function(){
        end = new Date();
        console.log(end.getTime() - start.getTime());
    });
}());
Edit:
Have you taken a look at the Browserscope reflow timer? Basically it checks to see how long it takes for the browser to return control to the JavaScript engine after changes to the DOM. According to the page it should work in any browser, although I haven't tested it personally. Perhaps you could adapt the code run during the tests to time the reflow in your page.
Also, you might want to have a look at CSS Stress Test. The bookmarklet is really great for page performance testing. http://andy.edinborough.org/CSS-Stress-Testing-and-Performance-Profiling
How about setting the PNG as a div background-image and running the stress test? It should enable/disable the image multiple times and report timings.

Scheduling update "threads" in JS / WebGL

Currently, I am rendering WebGL content using requestAnimationFrame which runs at (ideally) 60 FPS. I'm also concurrently scheduling an "update" process, which handles AI, physics, and so on using setTimeout. I use the latter because I only really need to update objects roughly 30 times per second, and it's not really part of the draw sequence; it seemed like a good idea to save the remaining CPU for actual render passes, since most of my animations are fairly hardware intensive.
My question is one of best practices. setTimeout and setInterval are not particularly kind to battery life and CPU consumption, especially when the browser is not in focus. On the other hand, using requestAnimationFrame (or tying the updates directly into the existing render phase) will potentially enforce far more updates every second than are strictly necessary, and may stop updating altogether when the browser is not in focus or at other times the browser deems unnecessary for "animation".
What is the best course of action for updating, but not rendering content?
setTimeout and setInterval are not particularly kind to battery life and CPU consumption
Let's be honest: Neither is requestAnimationFrame. The difference is that RAF automatically turns off when you leave the tab. That behavior can be emulated with setTimeout if you use the Page Visibility API, though, so in reality the power consumption problems between the two are about on par if used intelligently.
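A rough sketch of that emulation, pausing a setTimeout-driven update loop while the tab is hidden (updateWorld is a placeholder; some older browsers expose the Page Visibility API behind vendor prefixes such as document.webkitHidden):

var updateTimer = null;

function updateLoop() {
    updateWorld();                                   // placeholder for the AI/physics step
    updateTimer = setTimeout(updateLoop, 1000 / 30); // ~30 updates per second
}

document.addEventListener('visibilitychange', function () {
    clearTimeout(updateTimer);
    if (!document.hidden) {
        updateLoop(); // resume only while the tab is visible
    }
});

updateLoop();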
Beyond that, though, setTimeout/setInterval is perfectly appropriate for use in your case. The only thing you may want to be aware of is that you'll be hard pressed to get it perfectly in sync with the render loop. You'll have cases where you draw one too many times before your animation update hits, which can lead to minor stuttering. If you're rendering at 60 Hz and updating at 30 Hz it shouldn't be a big issue, but you'll want to be aware of it.
If staying perfectly in sync with the render loop is important to you, you could simply have an if (framecount % 2) { updateLogic(); } check at the top of your RAF callback, which effectively limits your updates to 30 Hz (every other frame) and keeps them always in sync with the draw, as in the sketch below.
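Roughly, with drawScene and updateLogic standing in for the existing render and update code:

var framecount = 0;

function tick() {
    if (framecount % 2) {
        updateLogic();           // ~30 Hz, always in step with the draw
    }
    drawScene();                 // ~60 Hz render pass
    framecount++;
    requestAnimationFrame(tick);
}

requestAnimationFrame(tick);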
