Recalculate Style: why so stuttering? - javascript

Let's say we have code that injects a series of similar elements into the DOM. Something like this:
var COUNT = 10000,
    // trick: builds an array of COUNT index strings ('0' ... '9999')
    elements = Object.keys(Array(COUNT).join('|').split('|'));
var d = document,
    root = d.getElementById('root');

function inject() {
    var count = COUNT,
        ul = d.createElement('ul'),
        liTmpl = d.createElement('li'),
        liEl = null;
    console.time('Processing elements');
    while (count--) {
        liEl = liTmpl.cloneNode(false);
        liEl.textContent = elements[count];
        ul.appendChild(liEl);
    }
    console.timeEnd('Processing elements');
    console.time('Appending into DOM');
    root.appendChild(ul);
    console.timeEnd('Appending into DOM');
}
d.getElementById('inject').addEventListener('click', inject);
Demo.
When this snippet is run in Firefox (25.0), the time between calling inject and actually seeing the results more or less corresponds to what is logged by time/timeEnd: for 1000 elements, about 4 ms; for 10000, about 40; and so on. Quite normal, isn't it?
It's very different, however, with Chrome (30.0 and Canary 32.0 tested). While the reported times for processing and appending are actually lower than Firefox's, rendering these elements takes a LOT longer.
Puzzled, I checked Chrome's profiler for different scenarios, and it turned out the bottleneck is the Recalculate Style action: it takes 2-3 seconds for 10000 nodes, 8 seconds for 20000 nodes, and a whopping 17 seconds for 30000 nodes.
Now the real question is: has anyone been in the same situation? Are there any workarounds?
One possible way we've thought about is limiting the visibility of these nodes in a sort of lazy load ('a sort of', because it's more about 'lazy showing': the elements will already be in place; only their visibility will be limited). It's confirmed that Recalculate Style is triggered only when an element is about to become visible (which makes sense, actually).
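Roughly, that 'lazy showing' idea could look like the sketch below (the chunk size and function name are illustrative, not from the original demo):

// Sketch: insert all nodes hidden, then reveal them in chunks so
// Recalculate Style is spread over several frames instead of one.
function injectLazily(items, root, chunkSize) {
    var ul = document.createElement('ul');
    items.forEach(function (item) {
        var li = document.createElement('li');
        li.textContent = item;
        li.style.display = 'none'; // not rendered yet, so no style recalc
        ul.appendChild(li);
    });
    root.appendChild(ul);

    var shown = 0;
    function showChunk() {
        var limit = Math.min(shown + chunkSize, items.length);
        for (; shown < limit; shown++) {
            ul.children[shown].style.display = '';
        }
        if (shown < items.length) {
            requestAnimationFrame(showChunk); // next chunk on the next frame
        }
    }
    showChunk();
}
// e.g. injectLazily(elements, root, 500);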

Looks like the trouble is with the li elements, which have display: list-item.
If you use div elements instead of ul/li, it works pretty fast in Chrome.
Also, creating a CSS rule of li { display: block; } fixes the delay.
And manually adding display: list-item back shows the delay even if the elements are already rendered in the DOM (they have to be re-rendered, of course).
See demo at http://jsfiddle.net/6D7sM/1/
(So it seems that Chrome is slow at rendering display: list-item elements.)
There is also a relevant Chrome bug report, http://code.google.com/p/chromium/issues/detail?id=71305, which has been merged into http://code.google.com/p/chromium/issues/detail?id=94248 (it looks like in earlier versions this was crashing Chrome, but that has been fixed; the crashing, not the speed).
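A minimal sketch of that CSS workaround, applied from script (the class name is illustrative):

// Chrome-specific workaround: render the items as display: block
// instead of the slow display: list-item.
var style = document.createElement('style');
style.textContent = 'ul.fast-list li { display: block; }';
document.head.appendChild(style);
// ...then give the injected <ul> that class: ul.className = 'fast-list';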

Related

Whether dom-manipulation requires a refresh?

I have a doubt about DOM manipulation.
Does DOM manipulation require a refresh? Or when does DOM manipulation require refreshing? I see some sites keep loading while updating some of their parts.
Also,
How does react.js help to avoid this kind of problem in front-end development?
I have a doubt about DOM manipulation. Does DOM manipulation require a refresh?
It depends on what you mean by "refresh." I can think of at least three possible things you could mean:
"Refresh" like pressing F5 on a page or hitting the reload button
"Refresh" like recalculate the positions of the elements; this is more correctly called "reflow"
"Refresh" as in repaint the elements on the screen ("repaint")
Reload
No, it doesn't, and in fact if you reload the page, the changes you've made to the DOM with your browser-based code will get wiped out. Here's a really simple example of DOM manipulation; no reloading is done until the end, and when it's done you can see that the changes made previously are wiped out (and then, since the code is also reloaded, we're starting from scratch, so it all starts over):
for (let i = 0; i < 5; ++i) {
    delayedAdd(i);
}

function delayedAdd(value) {
    setTimeout(() => {
        // DOM manipulation
        const p = document.createElement("p");
        p.textContent = value;
        document.body.appendChild(p);
        if (value === 4) {
            // Refresh after the last one -- wipes out what we've done
            setTimeout(() => {
                location.reload();
            }, 800);
        }
    }, value * 800);
}
Reflow
Some DOM manipulations trigger reflow, yes; or more accurately, doing some things (which might be just getting, say, an element's current clientLeft property value) triggers reflow, and reflow may involve recalculating layout, which can be expensive. Code can cause reflow repeatedly, including causing the layout to be recalculated repeatedly, during a single series of manipulations. This list from Paul Irish claims to be "What forces layout/reflow. The comprehensive list." and Paul Irish is a deeply experienced person in this realm and well-regarded in the industry, so the list is likely to be accurate (though no one is perfect, and these things can sometimes change over time). The page also has some guidance on how to avoid reflow (or more accurately, how to avoid layout recalculation).
Here's an example of code causing unnecessary layout recalculation:
const elementLefts = [];
// This loop triggers layout recalculation 10 times
for (let n = 0; n < 10; ++n) {
    const span = document.createElement("span");
    span.textContent = n;
    document.body.appendChild(span);
    // Triggers layout recalculation every time:
    elementLefts.push(span.offsetLeft);
}
console.log(elementLefts.join(", "));
We can get that same information after the loop and only trigger a single layout recalculation:
const spans = [];
// This loop doesn't trigger any layout recalculation
for (let n = 0; n < 10; ++n) {
    const span = document.createElement("span");
    span.textContent = n;
    document.body.appendChild(span);
    spans.push(span);
}
// Triggers one layout recalculation, because nothing has changed between
// the times we get `offsetLeft` from each element
const elementLefts = spans.map(span => span.offsetLeft);
console.log(elementLefts.join(", "));
Repaint
The browser repaints the screen all the time, typically 60 times per second (or even more if the system it's running on has a higher refresh rate), unless it's blocked by a long-running task on the UI thread (which is shared with the main JavaScript code). DOM manipulations don't cause this repaint (though changing the DOM may change what's painted, which might in turn prevent the browser from reusing some painting information it had from last time).
Also, how does react.js help to avoid this kind of problem in front-end development?
Libraries like React can help with reflows by minimizing layout recalculations: they use the knowledge of what causes recalculations and avoid doing those things in loops, etc. That said, they're not a magic bullet, and like everything else they have cons as well as pros.
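For illustration, here's the kind of write batching such libraries lean on. This is not React's actual mechanism, just a sketch of the idea; the "list" element id is assumed:

// Batch DOM writes so the browser lays out once, not once per row.
const fragment = document.createDocumentFragment();
for (let n = 0; n < 1000; ++n) {
    const li = document.createElement("li");
    li.textContent = `Item ${n}`;
    fragment.appendChild(li); // no layout work: the fragment isn't in the document
}
document.getElementById("list").appendChild(fragment); // one insertion, one layout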

My loop in JavaScript gets progressively slower while updating the page's DOM

My Electron app contains a page which runs a loop within a JavaScript file. The loop updates every 600 milliseconds using setTimeout. The code works perfectly fine while adding values to a text area within the page's DOM, but it slows down progressively, by about 2-3 milliseconds every 5 seconds, when I use JavaScript to update the text area's scroll position. Chrome Dev Tools doesn't indicate a memory leak on first glance.
Does anybody know why that specific change to the DOM worsens performance progressively while the other change doesn't, and how I could possibly rectify the issue?
Example of the code:
const log = document.getElementById('log-text');

function start() {
    // do stuff
    core();
}

function core() {
    // do stuff
    log.value += "Looping..."; // this does not appear to affect performance
    log.scrollTop = log.scrollHeight; // adding this line progressively worsens performance
    // do stuff
    coreTimer = setTimeout(core, 600);
}
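No answer was posted here, but one plausible reading, consistent with the reflow discussion above, is that reading scrollHeight forces a synchronous layout, and that layout gets more expensive as the textarea's content grows. A hedged sketch of one possible mitigation, building on the question's code (the cap and names are assumptions, not from the original):

// Sketch: cap the log's size so layout stays cheap, and do the
// scroll write at most once per frame instead of on every append.
const MAX_LOG_LENGTH = 10000;
let scrollScheduled = false;

function appendToLog(text) {
    log.value = (log.value + text).slice(-MAX_LOG_LENGTH); // keep the tail only
    if (!scrollScheduled) {
        scrollScheduled = true;
        requestAnimationFrame(() => {
            log.scrollTop = log.scrollHeight; // one forced layout per frame
            scrollScheduled = false;
        });
    }
}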

Speeding up jQuery empty() or replaceWith() Functions When Dealing with Large DOM Elements

Let me start off by apologizing for not giving a code snippet. The project I'm working on is proprietary and I'm afraid I can't show exactly what I'm working on. However, I'll do my best to be descriptive.
Here's a breakdown of what goes on in my application:
User clicks a button
Server retrieves a list of images in the form of a data-table
Each row in the table contains 8 data-cells that in turn each contain one hyperlink
Each request by the user can contain up to 50 rows (I can change this number if need be)
That means the table contains upwards of 800 individual DOM elements
My analysis shows that jQuery("#dataTable").empty() and jQuery("#dataTable").replaceWith(tableCloneObject) take up 97% of my overall processing time and take on average 4-6 seconds to complete.
I'm looking for a way to speed up either of the above mentioned jQuery functions when dealing with massive DOM elements that need to be removed / replaced. I hope my explanation helps.
jQuery's empty() is taking a long time on your table because it does a truly monumental amount of work with the contents of the emptied element, in the interest of preventing memory leaks. If you can live with that risk, you can skip the logic involved and just do the part that gets rid of the table contents, like so:
while (table.firstChild) {
    table.removeChild(table.firstChild);
}
or
table.children().remove();
I recently had very large data tables that would eat up 15 seconds to a minute of processing when making changes, due to all the DOM manipulation being performed. I got it down to under 1 second in all browsers but IE (it takes 5-10 seconds in IE8).
The largest speed gain I found was to remove the parent element I was working with from the DOM, perform my changes on it, then reinsert it into the DOM (in my case the tbody).
Here you can see the relevant lines of code which gave me huge performance increases (this uses MooTools, but it can be ported to jQuery).
update_table : function(rows) {
    var self = this;
    this.body = this.body.dispose(); //<------ REMOVED HERE
    rows.each(function(row) {
        var active = row.retrieve('active');
        self.options.data_classes.each(function(hide, name) {
            if (row.retrieve(name) == true && hide == true) {
                active = false;
            }
        });
        row.setStyle('display', (active ? '' : 'none'));
        row.store('active', active);
        row.inject(self.body); //<-------- CHANGES TO TBODY DONE HERE
    });
    this.body.inject(this.table); //<----- RE-INSERTED HERE
    this.rows = rows;
    this.zebra();
    this.cells = this._update_cells();
    this.fireEvent('update');
},
Do you have to repopulate all at once, or can you do it in chunks on a setTimeout()? I realize it'll probably take even longer in chunks, but it's worth a lot for the user to see something happening rather than an apparent lockup.
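A rough sketch of that chunked approach (the chunk size, names, and the assumption that rows arrive as an array of '<tr>...</tr>' HTML strings are all illustrative):

// Append rows in batches so the browser can paint between batches
// instead of locking up for the whole rebuild.
function appendInChunks($container, rowHtmlStrings, chunkSize) {
    var index = 0;
    function nextChunk() {
        $container.append(rowHtmlStrings.slice(index, index + chunkSize).join(''));
        index += chunkSize;
        if (index < rowHtmlStrings.length) {
            setTimeout(nextChunk, 0); // yield so the UI can update
        }
    }
    nextChunk();
}
// e.g. appendInChunks($('#dataTable tbody'), rowHtmlStrings, 10);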
What worked for me is $("#myTable").detach(). I had to clear a table with thousands of rows. I tried $("#myTable").children().remove(); it was an improvement over $("#myTable").empty(), but still very slow. $("#myTable").detach() takes 1 second or less.
Works fine in FF and Chrome. IE is still slow.
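For reference, the detach pattern looks something like this (the container selector and the re-attachment step are assumed):

// Pull the table out of the DOM, mutate it cheaply, then re-attach.
var $table = $('#myTable').detach();  // removed from document, keeps data/events
$table.find('tbody').empty();         // cheap now: no live reflows
$table.appendTo('#tableContainer');   // single reflow on re-insertion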

Improving Efficiency in jQuery function

The while loop in this function runs too slowly (it blocks page load for 4-5 seconds) in IE/Firefox, but is fast in Safari...
It measures the pixel width of text on a page and truncates the text until it reaches an ideal width:
function constrain(text, ideal_width) {
    $('.temp_item').html(text);
    var item_width = $('span.temp_item').width();
    var ideal = parseInt(ideal_width);
    var smaller_text = text;
    var original = text.length;
    while (item_width > ideal) {
        smaller_text = smaller_text.substr(0, smaller_text.length - 1);
        $('.temp_item').html(smaller_text);
        item_width = $('span.temp_item').width();
    }
    var final_length = smaller_text.length;
    if (final_length != original) {
        return (smaller_text + '…');
    } else {
        return text;
    }
}
Any way to improve performance? How would I convert this to a bubble-sort function?
Thanks!
Move the calls to $() outside of the loop, and store the result in a temporary variable. Running that function is going to be the slowest thing in your code, aside from the calls to .html().
Library authors work very, very hard on making selector engines fast, but they're still dog slow compared to normal JavaScript operations (like looking up a variable in the local scope) because they have to interact with the DOM. Especially with a class selector like that: jQuery has to loop through basically every element in the document, looking at each class attribute and running a regex on it. Every time round the loop! Get as much of that stuff out of your tight loops as you can. WebKit runs it fast because it has getElementsByClassName while the other browsers don't. (Yet.)
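Applied to the question's function, the hoisting looks roughly like this (a sketch, keeping the question's names):

function constrain(text, ideal_width) {
    // Run the selector once, before the loop, and reuse the result.
    var temp_item = $('span.temp_item');
    temp_item.html(text);
    var item_width = temp_item.width();
    var ideal = parseInt(ideal_width, 10);
    var smaller_text = text;
    while (item_width > ideal) {
        smaller_text = smaller_text.substr(0, smaller_text.length - 1);
        temp_item.html(smaller_text); // no selector work here any more
        item_width = temp_item.width();
    }
    return smaller_text.length === text.length ? text : smaller_text + '…';
}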
Instead of removing one character at a time until you find the ideal width, you could use a binary search.
I see that the problem is that you are constantly modifying the DOM in the loop, by setting the HTML of the temp_item and then re-reading the width.
I don't know the context of your problem, but trying to adjust the layout by measuring the rendered elements is not a good practice from my point of view.
Maybe you could approach the problem from a different angle. Truncating to a fixed width is common.
Another possibility (a hack?), if you don't have other choices, could be to use the overflow CSS property of the container element and put the … in another element next to the text. Though I recommend you rethink the need to solve the problem the way you are intending.
Hugo
Other than the suggestion by Breton, another possibility to speed up your algorithm would be to use a binary search on the text length. Currently you are decrementing the length by one character at a time, which is O(N) in the length of the string. Instead, use a search which will be O(log N).
Roughly speaking, something like this:
function constrain(text, ideal_width) {
    ...
    var temp_item = $('.temp_item');
    var span_temp_item = $('span.temp_item');
    var text_len_lower = 0;
    var text_len_higher = smaller_text.length;
    while (true) {
        if (item_width > ideal) {
            // make smaller to the mean of "lower" and this
            text_len_higher = smaller_text.length;
            smaller_text = text.substr(0,
                (smaller_text.length + text_len_lower) / 2);
        } else {
            if (smaller_text.length >= text_len_higher) break;
            // make larger to the mean of "higher" and this
            text_len_lower = smaller_text.length;
            smaller_text = text.substr(0,
                (smaller_text.length + text_len_higher) / 2);
        }
        temp_item.html(smaller_text);
        item_width = span_temp_item.width();
    }
    ...
}
One thing to note is that each time you add something to the DOM, or change the html in a node, the page has to redraw itself, which is an expensive operation. Moving any HTML updates outside of a loop might help speed things up quite a bit.
As others have mentioned, you could move the calls to $() outside the loop. Create a reference to the element once, then call methods on it within the loop, as 1800 INFORMATION mentioned.
If you use Firefox with the Firebug plugin, there's a great way of profiling the code to see what's taking the longest time. Just click Profile under the first tab, perform your action, then click Profile again. It'll show a table with the time each part of your code took. Chances are you'll see a lot of entries from your JS framework library, but you can isolate your own code with a little trial and error.

clientWidth Performance in IE8

I have some legacy JavaScript that freezes the tfoot/thead of a table and lets the body scroll. It works fine except in IE8, where it's very slow.
I traced the problem to reading the clientWidth property of a cell in the tfoot/thead. In IE6/7 and Firefox 1.5-3 it takes around 3 ms to read clientWidth; in IE8 it takes over 200 ms, and longer as the number of cells in the table increases.
Is this a known bug? Is there any workaround or solution?
I've solved this problem, if you're still interested. The solution is quite complex. Basically, you need to attach a simple HTC to the element and cache its clientWidth/Height.
The simple HTC looks like this:
<component lightweight="true">
    <script>
        window.clientWidth2[uniqueID] = clientWidth;
        window.clientHeight2[uniqueID] = clientHeight;
    </script>
</component>
You need to attach the HTC using CSS:
.my-table td {behavior: url(simple.htc);}
Remember that you only need to attach the behavior for IE8!
You then use some JavaScript to create getters for the cached values:
var WIDTH = "clientWidth",
    HEIGHT = "clientHeight";

if (8 == document.documentMode) {
    window.clientWidth2 = {};
    Object.defineProperty(Element.prototype, "clientWidth2", {
        get: function() {
            return window.clientWidth2[this.uniqueID] || this.clientWidth;
        }
    });
    window.clientHeight2 = {};
    Object.defineProperty(Element.prototype, "clientHeight2", {
        get: function() {
            return window.clientHeight2[this.uniqueID] || this.clientHeight;
        }
    });
    WIDTH = "clientWidth2";
    HEIGHT = "clientHeight2";
}
Notice that I created the constants WIDTH/HEIGHT. You should use these to get the width/height of your elements:
var width = element[WIDTH];
It's complicated, but it works. I had the same problem as you: accessing clientWidth was incredibly slow. This solves the problem very well. It's still not as fast as IE7, but it's back to being usable again.
I was unable to find any documentation that this is a known bug. To improve performance, why not cache the clientWidth property and update the cache periodically? I.e., if your code was:
var someValue = someElement.clientWidth + somethingElse;
Change that to:
// Note: the following 3 lines use Prototype.
// To do this without Prototype, create the function,
// make a closure out of it, and have the function
// repeatedly call itself using setTimeout() with a timeout of 1000
// milliseconds (or more/less depending on the performance you need).
var updateCache = function() {
    this.clientWidthCache = $('someElement').clientWidth;
};
new PeriodicalExecuter(updateCache.bind(this), 1);

var someValue = this.clientWidthCache + somethingElse;
Your problem may be related to something else (and not only the clientWidth call): are you updating/resizing anything in your DOM while calling this function?
Your browser could be busy doing reflow in IE8, thus making clientWidth slower.
IE8 has the ability to switch between IE versions, and there is also a Compatibility Mode.
Have you tried switching to Compatibility Mode? Does that make any difference?
I thought I had also noticed slow performance when reading the width properties. And there may very well be.
However, I discovered that the main impact on performance in our app was that the function attached to the window's resize event was itself somehow causing another resize, which caused a cascading effect, though not an infinite loop. I realized this when I saw that the call count for the function was orders of magnitude larger in IE8 than in IE7 (love the IE Developer Tool). I think the reason is that some activities on elements, like setting element widths perhaps, now cause a reflow in IE8 that did not do so in IE7.
I fixed it by setting the window's resize event to resize="return myfunction();" instead of just resize="myfunction();" and making sure myfunction returned false.
I realize the original question is several months old but I figured I'd post my findings in case someone else can benefit.
