I have some code that gets triggered in a mouse-scroll callback on an SVG
and applies transformations to the SVG (for zooming in and out around a point).
For some huge SVGs the performance is sluggish, and I want to improve it,
so I am trying to measure the time from when the scroll starts until rendering is done.
I have the following code:
var timerStart = Date.now();
// some calculations
svgElement.setAttribute("transform", newTransform);
console.log("rendered:", Date.now() - timerStart);
However, I can see that rendering happens even after the log is printed.
I assumed that DOM manipulations were synchronous (the JavaScript runtime is single-threaded), but it seems this is not the case. Is there some rendering queue that performs the rendering asynchronously?
How can I accurately measure rendering performance in such cases?
Rendering doesn't just happen asynchronously, it happens concurrently (i.e. in another thread). So you can't accurately measure it from the JavaScript thread.
However, most modern browsers provide profiling in their developer tools, which lets you see what causes each render/layout/reflow and how to optimize your code.
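That said, you can approximate it from script: a requestAnimationFrame callback fires just before the next paint, and a zero-delay timeout scheduled inside it fires shortly after. A rough sketch, reusing svgElement and newTransform from the question:
var timerStart = Date.now();
svgElement.setAttribute("transform", newTransform);
requestAnimationFrame(function () {
    // Fires just before the browser paints the frame containing the change.
    setTimeout(function () {
        // Fires shortly after that paint; the difference approximates render time.
        console.log("approx. rendered:", Date.now() - timerStart);
    }, 0);
});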
Related
I'm building a real-time layout.
If the server sends packets every 200 ms, then Function Call + Recalculate Style + Layout + Paint must take less than 200 ms in total.
By using performance.mark with performance.measure, or just console.time('1') with console.timeEnd('1'), I can measure the Function Call part, which is not enough.
Is there any known way to put some sort of anchors in the code to get and log a number that includes Paint?
This will be used for automated performance testing.
Thanks in advance!
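For reference, my current measurement looks roughly like this (handlePacket stands in for the real update code):
performance.mark("update-start");
handlePacket(packet); // hypothetical: apply the incoming data to the layout
performance.mark("update-end");
performance.measure("update", "update-start", "update-end");
// Only covers the Function Call part, not Recalculate Style / Layout / Paint.
console.log(performance.getEntriesByName("update")[0].duration);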
It isn't possible to log Paint times using the Console API. It looks as though there was an attempt to integrate this into WebKit several years ago, but this never got implemented. Right now, you can only do CPU profiling using console.profile, but this isn't relevant for you.
You need to explicitly run the Timeline tool to gather Paint profiling data. You could look into using macros to run this. You can export your data into a JSON file, so that you can re-import and compare each one. It's not optimal for automated testing, but I'm not sure of other ways.
This isn't going to include the paint time itself, but requestAnimationFrame will be called just before the paint.
I also tried setTimeout(fn, 0), but it turned out to be called fairly long after the paint.
If all you want to know is that there is some time left at the end of a frame you might be able to use requestIdleCallback.
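Putting the two together, a rough sketch (the timing is an approximation, not an exact paint duration):
requestAnimationFrame(function () {
    // Called just before the paint.
    var beforePaint = performance.now();
    requestIdleCallback(function (deadline) {
        // Called once the browser is idle, i.e. after the frame's paint work.
        console.log("paint + layout took approx.", performance.now() - beforePaint, "ms");
        console.log("idle time left this frame:", deadline.timeRemaining(), "ms");
    });
});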
DOM blocking is something many people not familiar with JavaScript's strictly single-threaded synchronous execution model find out about the hard way, and it's usually just something we want to work around somehow (using timeouts, web-workers, etc). All well and good.
However, I would like to know if blocking of the actual user-visible rendering is something you can actually rely on. I'm 90% sure it is de facto the case in most browsers, but I am hoping this isn't just a happily consistent accident. I can't seem to find any definitive statements from DOM specifications or even vendor documentation like MDN.
What worries me slightly is that while changes to the DOM are indeed not visible looking at the page, the internal DOM geometry (including CSS transforms and filters) does actually update during synchronous execution. For example:
console.log(element.getBoundingClientRect().width);
element.classList.add("scale-and-rotate");
console.log(element.getBoundingClientRect().width);
element.classList.remove("scale-and-rotate");
... will indeed report two different width values, though the page does not appear to flash. Synchronously waiting after the class is added (using a while loop) doesn't make the temporary changes visible either. Doing a Timeline trace in Chrome reveals that internally paint and re-paint are taking place just the same, which makes sense...
My concern is that, lacking a specific reason not to, some browsers (say, those dealing with underpowered mobile CPUs) may choose to actually reflect those internal calculations in the user-visible layout during that function's execution, which would result in an ugly "flash" during such temporary operations. So, more concretely, what I'm asking is: do they have a specific reason not to?
(If you are wondering why I care about this at all: I sometimes need to measure calculated dimensions using getBoundingClientRect for elements in a certain state, to plan out spacing or animations or other such things, without actually putting them in that state or animating them first...)
According to various sources, getting the position or size of a DOM element will trigger a reflow of the output if necessary, so that the returned values are correct. As a matter of fact, reading the offsetHeight of an element has become a way to force a reflow, as reported by Alexander Skutin and Daniel Norton.
Paul Irish gives a list of several actions that cause a reflow. Among them are these element box metrics methods and properties:
elem.offsetLeft, elem.offsetTop, elem.offsetWidth, elem.offsetHeight, elem.offsetParent
elem.clientLeft, elem.clientTop, elem.clientWidth, elem.clientHeight
elem.getClientRects(), elem.getBoundingClientRect()
Stoyan Stefanov describes strategies used by browsers to optimize reflows (e.g. queueing DOM changes and performing them in batches), and adds the following remark:
But sometimes the script may prevent the browser from optimizing the reflows, and cause it to flush the queue and perform all batched changes. This happens when you request style information, such as:
offsetTop, offsetLeft, offsetWidth, offsetHeight
scrollTop/Left/Width/Height
clientTop/Left/Width/Height
getComputedStyle(), or currentStyle in IE
All of these above are essentially requesting style information about a node, and any time you do it, the browser has to give you the most up-to-date value. In order to do so, it needs to apply all scheduled changes, flush the queue, bite the bullet and do the reflow.
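To illustrate the difference (a sketch; boxes stands for any array of elements already in the document): interleaving reads and writes forces a reflow on every pass, while batching the reads first lets the browser flush layout only once.
// Slow: each offsetWidth read forces the browser to flush the queued style change.
for (var i = 0; i < boxes.length; i++) {
    boxes[i].style.width = (boxes[i].offsetWidth + 10) + "px";
}

// Faster: read all the widths first, then write, so layout is flushed only once.
var widths = boxes.map(function (box) { return box.offsetWidth; });
boxes.forEach(function (box, i) {
    box.style.width = (widths[i] + 10) + "px";
});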
There is nothing in JavaScript related to concurrency that is anything but de facto. JS simply does not define a concurrency model. Everything is happy accident or years of consensus.
That said, if your function does not make any calls to weird things like XMLHttpRequest or "alert" or something like that, you can basically treat it as single-threaded with no interrupts.
I need to be able to benchmark a particular build of a WebKit-based browser, and am measuring how long it takes to do certain things like DOM manipulation, hitting memory limits, etc.
I have a test below which records how long it takes to simultaneously load 10 fairly heavy PNG graphics. In code, I need to be able to time how long the load takes to finish. I have tried setting
the onLoad function on the dynamic Image object to produce a time in ms. However, as shown in the screen capture below, this gives an inaccurate reading: it only records the data-transfer part of the load, after which there is a considerable (3000+ ms) delay before the images are viewable (looped in blue), which is the browser's reflow cycle.
Is there some event in WebKit I can use to record when the browser has finished a reflow, so that I can benchmark this? I have to be able to record the time in milliseconds in code, because the build of WebKit I am testing has no developer tools. I am able to observe the difference in Chrome fine, but the performance of the two builds differs drastically and I need to be able to quantify it accurately for comparison.
If you are using jQuery, you could try recording the time between document ready and window load; that would give you an approximation.
(function () {
    var start, end;
    $(document).ready(function () {
        start = new Date();
    });
    $(window).load(function () {
        end = new Date();
        console.log(end.getTime() - start.getTime());
    });
}());
Edit:
Have you taken a look at the Browserscope reflow timer? Basically, it checks how long it takes for the browser to return control to the JavaScript engine after changes to the DOM. According to the page, it should work in any browser, although I haven't tested it personally. Perhaps you could adapt the code run during the tests to time the reflow in your page.
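The underlying trick is simple enough to reproduce yourself, which also fits your no-dev-tools constraint: queue a style change, then read back a layout property to force a synchronous reflow you can time. A rough sketch (not the actual Browserscope code):
var start = Date.now();
document.body.style.paddingLeft = "1px"; // queue a style change
var height = document.body.offsetHeight; // reading offsetHeight forces a synchronous reflow
var reflowTime = Date.now() - start;
document.body.style.paddingLeft = "";    // undo the change
console.log("reflow took approx.", reflowTime, "ms");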
Also, you might want to have a look at CSS Stress Test. The bookmarklet is really great for page performance testing. http://andy.edinborough.org/CSS-Stress-Testing-and-Performance-Profiling
How about setting the PNG as a div background-image and running the stress test, it should enable/disable the image multiple times with timings.
Currently, I am rendering WebGL content using requestAnimationFrame which runs at (ideally) 60 FPS. I'm also concurrently scheduling an "update" process, which handles AI, physics, and so on using setTimeout. I use the latter because I only really need to update objects roughly 30 times per second, and it's not really part of the draw sequence; it seemed like a good idea to save the remaining CPU for actual render passes, since most of my animations are fairly hardware intensive.
My question is one of best practices. setTimeout and setInterval are not particularly kind to battery life and CPU consumption, especially when the browser is not in focus. On the other hand, using requestAnimationFrame (or tying the updates directly into the existing render phase) will potentially enforce far more updates every second than are strictly necessary, and may stop updating altogether when the browser is not in focus or at other times the browser deems unnecessary for "animation".
What is the best course of action for updating, but not rendering content?
setTimeout and setInterval are not particularly kind to battery life and CPU consumption
Let's be honest: Neither is requestAnimationFrame. The difference is that RAF automatically turns off when you leave the tab. That behavior can be emulated with setTimeout if you use the Page Visibility API, though, so in reality the power consumption problems between the two are about on par if used intelligently.
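For instance, a sketch of that emulation (update stands in for your 30 Hz logic function):
var updateTimer = null;

function startUpdates() {
    if (updateTimer === null) {
        updateTimer = setInterval(update, 1000 / 30); // ~30 updates per second
    }
}

function stopUpdates() {
    clearInterval(updateTimer);
    updateTimer = null;
}

// Pause the update loop when the tab is hidden, mirroring what RAF does for rendering.
document.addEventListener("visibilitychange", function () {
    if (document.hidden) {
        stopUpdates();
    } else {
        startUpdates();
    }
});

startUpdates();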
Beyond that, though, setTimeout/setInterval is perfectly appropriate for your case. The only thing you may want to be aware of is that you'll be hard-pressed to keep it perfectly in sync with the render loop. You'll have cases where you draw one time too many before your animation update hits, which can lead to minor stuttering. If you're rendering at 60hz and updating at 30hz it shouldn't be a big issue, but you'll want to be aware of it.
If staying perfectly in sync with the render loop is important to you, you could simply have an if (framecount % 2) { updateLogic(); } at the top of your RAF callback, which effectively limits your updates to 30hz (every other frame) and is always in sync with the draw.
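Roughly (a sketch; render and updateLogic stand in for your existing draw and update functions):
var framecount = 0;

function frame() {
    if (framecount % 2) {
        updateLogic(); // runs every other frame: ~30hz, always in sync with the draw
    }
    render();          // runs every frame: ~60hz
    framecount++;
    requestAnimationFrame(frame);
}

requestAnimationFrame(frame);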
I have a tree that gets populated through a web service. That part is super fast; the part that's a bit slower is populating the tree. I have a rotating GIF image that spins while the service is loading. Since I use the ajaxStart and ajaxStop triggers, the GIF stops rotating after the AJAX request has completed, which is correct. However, because the tree population takes a split second, the GIF freezes for that split second, which looks unprofessional.
How do I make the GIF rotate until the tree has finished loading?
Browsers give a low priority to image refreshing, so while your code is manipulating and inserting into the DOM, the browser is busy with that and doesn't have time to repaint the image.
There's not a whole lot you can do besides optimizing your code so that processing the AJAX data is less intensive. For example, if you're getting a list of 1000 items, insert them into the page in chunks of 50, with a small delay between chunks, so the browser has time to repaint (sketched below).
YMMV, maybe it looks great as is in Chrome, but freezes for 5 seconds in IE.
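A minimal sketch of that chunked approach (insertInChunks, listOf1000Items, and the "tree" container are made-up names):
function insertInChunks(items, container, chunkSize) {
    var i = 0;
    (function nextChunk() {
        // Build one chunk off-DOM in a fragment, then append it in a single operation.
        var fragment = document.createDocumentFragment();
        for (var end = Math.min(i + chunkSize, items.length); i < end; i++) {
            var li = document.createElement("li");
            li.textContent = items[i];
            fragment.appendChild(li);
        }
        container.appendChild(fragment);
        if (i < items.length) {
            setTimeout(nextChunk, 10); // small delay so the browser can repaint the gif
        }
    }());
}

insertInChunks(listOf1000Items, document.getElementById("tree"), 50);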
Browsers won't typically update images whilst JavaScript code is executing. If you need the spinner to continue animating during DOM population, your population function will have to give up control back to the browser several times a second to let it update the image, typically by setting a timeout (with no delay, or a very short delay) that calls back into the population process, and then returning.
Unfortunately this will usually make your population function much more complicated, as you have to keep track of how far you've got in the population process in variables instead of relying on loops and conditional structures to remember where you are. Also, it will be slightly slower to run, depending on how you're populating the page structures, and if there are click or other events that your application might get delivered half-way through population you can end up with nasty race conditions.
IMO it would probably be better to stop the spinner and then update the DOM. You'll still get the pause, but without the spinner stuttering to a halt it won't be as noticeable. To give the browser a chance to update the spinner after ajaxStop has changed its src, use a zero-delay-timeout-continuation in your AJAX callback function so that on completion the browser gets a chance to display the altered spinner before going into the lengthy population code.
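In jQuery terms that might look like the following (stopSpinner and populateTree are hypothetical stand-ins for your own code):
$(document).ajaxStop(function () {
    stopSpinner(); // hypothetical: hide or swap out the animated gif
    setTimeout(function () {
        // By now the browser has had a chance to repaint the stopped spinner.
        populateTree(); // hypothetical: the lengthy DOM population
    }, 0);
});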
Making this population step faster is definitely worthwhile, if a slightly different topic:
- Appending lots of DOM elements one after the other is inherently slow, as each operation has to spend more time trudging through list operations.
- Appending lots of DOM elements all at once via a DocumentFragment is fast, but getting all those DOM elements into the fragment in the first place might not be.
- Parsing the entire innerHTML at once is generally fast, but generating HTML without injection security holes is an annoyance; serialising and re-parsing via innerHTML+= is slower and totally awful.
- IE/HTML5 insertAdjacentHTML is fast, but needs a fallback implementation for many browsers: ideally fast Range manipulation, falling back to slow node-by-node DOM calls for browsers with no Range.
- Don't expect jQuery's append to do this for you; it is as slow as node-by-node DOM operations, because that's exactly what it's doing.
While manipulating the DOM on the fly is really taxing for a lot of browsers (especially older ones), you might want to optimize what you are doing there as much as you can.
Also, another good idea would be to make sure you are running jQuery 1.4, which is a lot faster at such operations.
You can see a useful benchmark (1.3 vs 1.4) done by the jQuery team that illustrates this here:
http://jquery14.com/day-01/jquery-14