Web page div empty vs hide - javascript

When you have two <div> elements, both with a fair amount of graphic content, and only one is shown at a time, is it better to hide the one not shown, or to empty it and re-insert its HTML when it is needed again?
When you hide, does everything stay in memory?

The answer is most probably: it depends.
We all have some intuition about what different operations cost in terms of performance, but when you are not sure, the best practice is to estimate and benchmark. The idea is to understand what the trade-off of each method is:
First, and easier: benchmark the memory impact of your hidden div. Note that a hidden <div> is kept in memory but is not rendered on the page, so it has a smaller memory footprint than a rendered element. Measuring this is easy, even with something as simple as Chrome's Task Manager. Switch between the two divs multiple times and measure the memory footprint of each method. Is it really as significant as you expected?
Second, and a little more complicated: measure the impact of reloading and re-rendering on your client's system and on the user experience of your app. It is best to use a WEAK machine, and maybe even a slow connection. Measure any delay that is created (with code and reporting if possible, or at least by feel), and watch for CPU spikes and process slowdown. Switch between the divs multiple times, both slowly and rapidly. Does it still feel slick?
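For example, a minimal sketch of such a timing comparison could look like the following (the element IDs and the idea of keeping the markup in a string are hypothetical; adapt them to your page):

```javascript
// Hypothetical sketch: time the two switching strategies with performance.now().
// Assumes two containers, #panelA and #panelB, and that panelB's markup is
// kept as a string for the "empty and re-insert" case.
const panelA = document.getElementById('panelA');
const panelB = document.getElementById('panelB');
const panelBMarkup = panelB.innerHTML; // copy for re-insertion

function timeIt(label, fn) {
  const start = performance.now();
  fn();
  // This only measures script time; layout and paint happen afterwards,
  // so also watch the DevTools timeline for the full picture.
  console.log(label, (performance.now() - start).toFixed(2) + ' ms');
}

// Strategy 1: hide/show
timeIt('hide/show', () => {
  panelA.style.display = 'none';
  panelB.style.display = 'block';
});

// Strategy 2: empty and re-insert
timeIt('empty/re-insert', () => {
  panelB.innerHTML = '';            // empty
  panelB.innerHTML = panelBMarkup;  // re-insert
});
```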
In this case, I tend to guess the memory footprint is much smaller than you assume, but that's just my experience. I believe hiding and showing will require less effort than emptying and reloading.
That said, I'm certain that after trying both, it will become very clear to you which method is correct for your case: hiding if the memory footprint is small, reloading if it's so large that it's worth the slowness of reloading. Only you can measure and figure out where the line is.
Side note: when hiding, best practice is to use display: none;. This ensures elements are removed from the render tree, which gives better performance than opacity: 0; or visibility: hidden;. If you need one of those for specific functionality, use it; if you don't need that functionality, use display: none;. Also note that jQuery's .hide() uses display: none;, so it follows this best practice. From the jQuery .hide() documentation:
This is roughly equivalent to calling .css( "display", "none" )
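As a quick illustration (a minimal sketch assuming jQuery is loaded; the selector is hypothetical), the following are effectively equivalent:

```javascript
// Plain DOM: removes the element from the render tree (it stays in the DOM and in memory).
document.getElementById('panelA').style.display = 'none';

// jQuery: .hide() sets display: none under the hood; .show() restores the previous display value.
$('#panelA').hide();
$('#panelA').show();

// By contrast, these keep the element in the render tree (it still participates in layout):
// document.getElementById('panelA').style.visibility = 'hidden';
// document.getElementById('panelA').style.opacity = '0';
```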
Sources:
How Web Browsers Work: Render Tree Construction
jQuery documentation: .hide()
jQuery documentation: .empty()

Preferred way of swapping big trees (e.g. pages in SPAs): `display: none`, `replaceChild()`, etc

When developing complex web applications, like SPAs, we are faced with the problem of displaying, enabling, and keeping up to date different parts of the application at different times.
For instance, as a simple example, consider swapping the current page/section/activity of the application in a SPA.
Assuming that these trees are non-trivial (they are entire pages; they may contain CSS animations, event handlers, some non-static content, etc.) and that we care about performance, what is, nowadays, the preferred way of swapping them in and out? Which approach do modern frameworks choose? Why?
In particular, several metrics are important to consider:
Overall performance/memory usage/etc. (i.e. when not swapping pages)
Actual time required to switch pages
Note that in order to be as quick as possible, we want to keep the content/pages pre-loaded somehow (i.e. no network requests are discussed here).
For context, here are several approaches people have used over the years (a rough sketch of each follows the list):
Have some string templates in JS. When pages are switched, replace the contents entirely with innerHTML in one go (for performance), reattaching handlers and starting to update the non-static values as needed at that moment, etc.
Keep everything in the DOM (i.e. all elements of all possible pages), but give inactive pages display: none. Then swap the current page by simply switching the display property on the page going out and the one going in. Also, do not update non-visible pages, to avoid performance penalties.
Keep the trees in JS (i.e. keep a reference to the top-most Node of each page), and swap them in and out of the DOM with e.g. replaceChild().
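For reference, here is a minimal, hypothetical sketch of these three approaches (the container, templates, and helper functions are assumptions, not part of any particular framework):

```javascript
// Hypothetical setup: #app is the mount point; buildHomePage()/buildSettingsPage()
// are assumed helpers returning fully built, detached DOM trees, and
// homeTemplate/settingsTemplate are assumed HTML strings.
// (Each approach below is an alternative; you would pick one, not combine them.)
const container = document.getElementById('app');

// Approach 1: string templates, replaced in one go with innerHTML.
function showPageByInnerHTML(html) {
  container.innerHTML = html;   // e.g. homeTemplate or settingsTemplate
  // ...reattach event handlers and resume updating non-static values here.
}

// Approach 2: keep every page in the DOM and only toggle `display`.
const pages = { home: buildHomePage(), settings: buildSettingsPage() };
Object.values(pages).forEach(node => {
  node.style.display = 'none';
  container.appendChild(node);
});
function showPageByDisplay(name) {
  Object.entries(pages).forEach(([key, node]) => {
    node.style.display = key === name ? '' : 'none';
  });
}

// Approach 3: keep the detached trees in JS and swap them with replaceChild().
let currentPage = null;
function showPageByReplace(name) {
  const next = pages[name];
  if (currentPage) container.replaceChild(next, currentPage);
  else container.appendChild(next);
  currentPage = next;
}
```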

Why does the startup time grow when we group forced reflows using fastdom?

We're building a web framework and have identified forced reflows as one of the main performance bottlenecks of our applications. From web research, we learned that actions that trigger forced reflows should be grouped into read and write batches to minimize the number of reads that are actually costly (a read is only expensive when something has changed since the last layout). [Our main goal is to get rid of forced reflows altogether, but we doubt that we can remove every single one, and from what we learned it is the first one after a change that is expensive.]
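As a rough illustration of that batching idea, here is a minimal sketch using fastdom's measure/mutate queues (the elements collection and the height calculation are hypothetical):

```javascript
import fastdom from 'fastdom';

const elements = document.querySelectorAll('.card');  // hypothetical elements

// Interleaved version (what we are trying to avoid): every read after a write
// forces a synchronous reflow.
// elements.forEach(el => {
//   const width = el.offsetWidth;          // read, forces layout if it is dirty
//   el.style.height = width / 2 + 'px';    // write, dirties layout again
// });

// Batched version: fastdom queues all reads before all writes per frame.
elements.forEach(el => {
  fastdom.measure(() => {
    const width = el.offsetWidth;           // read phase
    fastdom.mutate(() => {
      el.style.height = width / 2 + 'px';   // write phase
    });
  });
});
```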
In a proof of concept, we implemented this batching approach using the fastdom library for a single application start. Surprisingly, instead of the performance increase we hoped for, the startup performance decreased from (in median) 372 ms to 391 ms time to interactive. In the performance trace, we see that the large recalculate-style/layout cycle now comes directly from revealing the changed HTML, and that the JS-triggered forced reflows (in the animation frame after that reveal) are actually fast. However, the overall performance was better with the forced reflows happening before the reveal.
(Performance traces before and after implementing fastdom were attached as screenshots.)
Can someone explain why we observe this behavior? Why does it seem that the community-approved approach works in the opposite direction in our case?
Thanks in advance for any hints!

Async loading of Typekit :: is it worth it, or better not to use it at all?

Trying to get page-load time down.
I followed the third example outlined here to asynchronously load the TypeKit javascript.
To make it work, you have to add a .wf-loading #some-element { visibility: hidden; } rule for each element that uses the font; after either 1) the font loads or 2) a set time passes (1 sec), the text becomes visible.
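Roughly, the mechanism works like this (a simplified sketch, not the actual Typekit snippet; loadTypekitScript stands in for whatever async loader is used):

```javascript
// Simplified sketch of the async font-loading pattern described above.
// The real Typekit/Web Font Loader snippet manages these classes itself;
// this only illustrates the mechanism.
var html = document.documentElement;
html.classList.add('wf-loading');   // the CSS hides font-styled elements while this class is present

// Fallback: if the fonts have not loaded after 1 second, reveal the text anyway.
var fallback = setTimeout(function () {
  html.classList.remove('wf-loading');
}, 1000);

loadTypekitScript(function onFontsActive() {  // hypothetical async loader callback
  clearTimeout(fallback);
  html.classList.remove('wf-loading');
});
```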
The thing is, the CSS I'm working with has the font assigned to about 200 elements, so that's about 200 .wf-loading { } rules (note: I did not write this CSS).
I feel that having that many rules, with all the DOM traversal it implies, would slow the load time down more than just letting the font load regularly. If that is the case, I will just axe Typekit altogether and go with a regular font.
Are there any tools I can use to run performance tests on this kind of stuff? Or has anyone tested these things out?
You're not actually modifying more than a single DOM element (the root <html> element) with this approach. This means modern browsers can rely on their very fast CSS engines, so the number of elements involved will have no noticeable effect on page load.
As far as page load and flicker go, network latency is usually an order of magnitude worse than DOM manipulation. There will always be some flicker on the very first (unprimed) page load while the browser waits for the font to download. Just make sure your font is cached for reuse, and try to keep its file size as small as possible.
I went down this path a few years ago with Cufon. In the end, I chose the simplest path with acceptable performance and stopped there. It's easy to get caught up in optimizing page loads, but there are probably more promising areas for improvement: features, bugs, refactoring, etc.
The problem is, as they mention in the blog, the rare occasion (but it definitely does happen, certainly more than once for me) when the Typekit CDN fails completely and users just see a blank page. That's when you'll wish you'd used async loading.

Opinion on adding a pre-styled class to a dynamically created element vs. adding styling there and then

In our web application, there are a lot of effects flying about. We are switching to creating as many elements as possible dynamically, i.e. only when they are needed.
I'm curious about a technique I love using: defining a class with CSS styling in the stylesheet, and then, when the element is created with JS, simply adding the class to the element to give it the styling defined in the CSS file.
Is this really the best approach, or would applying the styling in JavaScript (element.style.* = ...) be better?
Note: memory is very important in our case, so whichever uses less memory and rendering load would be better.
This depends on the case. If you have a fixed style for an element, which you just switch on and off, then adding/removing a class is the way to go. However, if you are continuously changing the style (e.g. in an animation), then modifying the style directly is better. In terms of memory, adding/removing classes would probably be more efficient.
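For example (a minimal sketch; the element, class name, and animation are hypothetical):

```javascript
// Fixed, reusable styling: toggle a class that is defined once in the stylesheet.
const box = document.getElementById('box');   // hypothetical element
box.classList.add('highlighted');             // .highlighted { ... } lives in the CSS file
box.classList.remove('highlighted');

// Continuously changing values (e.g. an animation): write the style directly.
let x = 0;
function step() {
  x += 2;
  box.style.transform = 'translateX(' + x + 'px)';
  if (x < 300) requestAnimationFrame(step);
}
requestAnimationFrame(step);
```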
Putting it in a separate stylesheet is usually considered best practice in terms of maintainability, separation of content from logic, all that good stuff.
But memory usage and render times, which you mention specifically as being very important to you, might be another matter.
You can use the web developer tools built into most modern browsers (e.g., Chrome Developer Tools) to try both approaches and profile for memory and render times. In Chrome Dev Tools, select Timeline, hit the record button at the bottom, load your page, do a few things on the page if that's relevant to you, stop the recording, and examine the memory usage and load time right there.
If your concern is animations, you may want to install Chrome Canary, which has a third option under Timeline (alongside Events and Memory) called something like Frames.

How to make GIF rotate when the tree is loading in Javascript

I have a tree that gets populated through a web service. That part is super fast; the part that's a bit slower is populating the tree. I have a rotating GIF image that spins while the service is loading. Since I use the ajaxStart and ajaxStop triggers, the GIF stops rotating after the AJAX request has completed, which is correct. However, because populating the tree takes a split second, the GIF freezes for that split second, which looks unprofessional.
How do I make the gif rotate until the tree is finished loading?
Browsers give a low priority to image refreshing, so while your code is manipulating and inserting into the DOM, the browser is busy with that and doesn't have time to repaint the image.
There's not a whole lot you can do besides optimizing your code so that the processing you're doing with the AJAX data is less intensive, or, for example, if you're getting a list of 1000 items, inserting them into the page in batches of 50 with a small delay between each batch, so the browser has time to repaint.
YMMV, maybe it looks great as is in Chrome, but freezes for 5 seconds in IE.
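A minimal sketch of that batched-insertion idea (the per-item rendering, the container, and the 10 ms delay are hypothetical):

```javascript
// Insert `items` into `container` in batches of `batchSize`, yielding to the
// browser between batches so it can repaint the spinner.
function insertInBatches(items, container, batchSize, done) {
  let index = 0;
  function nextBatch() {
    const fragment = document.createDocumentFragment();
    const end = Math.min(index + batchSize, items.length);
    for (; index < end; index++) {
      const li = document.createElement('li');   // hypothetical per-item rendering
      li.textContent = items[index].label;
      fragment.appendChild(li);
    }
    container.appendChild(fragment);
    if (index < items.length) {
      setTimeout(nextBatch, 10);   // small delay lets the browser repaint
    } else if (done) {
      done();                      // e.g. stop the spinner here
    }
  }
  nextBatch();
}
```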
Browsers won't typically update images whilst JavaScript code is executing. If you need the spinner to continue animating during DOM population, your population function will have to give up control back to the browser several times a second to let it update the image, typically by setting a timeout (with no delay, or a very short delay) that calls back into the population process, and then returning.
Unfortunately this will usually make your population function much more complicated, as you have to keep track of how far you've got in the population process in variables instead of relying on loops and conditional structures to remember where you are. Also, it will be slightly slower to run, depending on how you're populating the page structures, and if there are click or other events that your application might get delivered half-way through population you can end up with nasty race conditions.
IMO it would probably be better to stop the spinner and then update the DOM. You'll still get the pause, but without the spinner stuttering to a halt it won't be as noticeable. To give the browser a chance to update the spinner after ajaxStop has changed its src, use a zero-delay-timeout-continuation in your AJAX callback function so that on completion the browser gets a chance to display the altered spinner before going into the lengthy population code.
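For instance (a sketch assuming a jQuery callback; the URL and populateTree are hypothetical):

```javascript
$.get('/tree-data', function (data) {   // hypothetical endpoint
  // ajaxStop fires once the request completes and stops/swaps the spinner.
  // Defer the heavy DOM work so the browser can paint that change first.
  setTimeout(function () {
    populateTree(data);                 // hypothetical, lengthy population code
  }, 0);
});
```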
Making this population step faster is definitely worthwhile, if a slightly different topic. (Appending lots of DOM elements one after the other is inherently slow as each operation has to spend more time trudging through list operations. Appending lots of DOM elements all at once via a DocumentFragment is fast, but getting all those DOM elements into the fragment in the first place might not be. Parsing the entire innerHTML at once is generally fast, but generating HTML without injection security holes is an annoyance; serialising and re-parsing via innerHTML+= is slower and totally awful. IE/HTML5 insertAdjacentHTML is fast, but needs fallback implementation for many browsers: ideally fast Range manipulation, falling back to slow node-by-node DOM calls for browsers with no Range. Don't expect jQuery's append to do this for you; it is as slow as node-by-node DOM operations because that's exactly what it's doing.)
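To illustrate the contrast drawn above (a minimal sketch; `list`, `items`, and the escapeHtml helper are hypothetical):

```javascript
// Slow: each `innerHTML +=` re-serialises and re-parses the whole container.
// items.forEach(item => { list.innerHTML += '<li>' + escapeHtml(item) + '</li>'; });

// Faster: build the markup once and let the browser parse it in a single operation
// (escapeHtml is a hypothetical helper that guards against HTML injection).
list.innerHTML = items.map(item => '<li>' + escapeHtml(item) + '</li>').join('');

// Or: build nodes into a DocumentFragment and append them all at once.
const fragment = document.createDocumentFragment();
items.forEach(item => {
  const li = document.createElement('li');
  li.textContent = item;   // textContent avoids injection issues entirely
  fragment.appendChild(li);
});
list.appendChild(fragment);
```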
Since manipulating the DOM on the fly is really heavy work for a lot of browsers (especially older ones), you might want to optimize what you are doing there as much as you can.
Also, another good idea would be to make sure you are running jQuery 1.4, which is a lot faster at such operations.
You can see a useful benchmark (1.3 vs 1.4) done by the jQuery team that illustrates this here:
http://jquery14.com/day-01/jquery-14
