I'm currently updating the content of a lot of divs using:
https://jsfiddle.net/foreyez/ctpuaw5v/
for (var i = 0; i < 400; i++) {
    document.getElementById('box' + i).innerHTML = 'a'; // each element can have a different value; 'a' is arbitrary
}
(400 is arbitrary; it could be a lot more.)
I'm wondering about browser reflow: does the browser reflow on each innerHTML set? If so, is there a way to update all the divs at once with only one reflow (or even no reflow) for performance reasons, or perhaps something faster than innerHTML?
If you can replace the 400 calls to elem.innerHTML = ...; with a single call of
container.innerHTML = cumulativeHtml;, it will be faster: one DOM mutation instead of 400.
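A sketch of that, assuming the 400 boxes live in a single wrapper element (the container id is an assumption) and can be rebuilt wholesale; note that this replaces the nodes, so any event listeners attached to them are lost:
var container = document.getElementById('container'); // assumed wrapper around the boxes
var html = '';
for (var i = 0; i < 400; i++) {
    html += '<div id="box' + i + '">a</div>'; // build all the markup as one string
}
container.innerHTML = html; // one parse, one DOM mutation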
If you use jQuery, you can just do something like:
$('.box').html('whatever');
Originally I had a huge div with many child elements that was display: none; I would then set it to display: '' and the entire div would become visible. This created some noticeable lag. I want to throttle it by displaying the elements one by one with a timeout, but the function I created causes strange behavior. It actually works fine if you remove the setTimeout, but without the setTimeout there is still the same lag.
function throttleDisplay(page) {
    page.style.display = '';
    var children = page.children;
    if (!children.length) return;
    for (var i = 0; i < children.length; i++) {
        var child = children[i];
        setTimeout(function() {
            throttleDisplay(child);
        }, 100);
    }
}
Several problems:
Revealing the top-level <div> all-at-once by setting display:block (or display:'') will trigger just one page re-flow and re-paint, and will therefore create less "lag" than recursively revealing children, which will thrash your layout with exponential re-flows and re-paints.
setTimeout (and therefore its callback) is called for each child in the for loop (at one recursion tier) more or less simultaneously, so this throttles the reveal of descendant elements, not sibling elements. Also, because child is declared with var, every callback scheduled in one loop shares the same child variable; by the time the timeouts fire, they all recurse into the last child, which alone can produce "strange behavior".
Unless every element in the tree begins with display:none, setting the top-level element to display:'' will reveal the tree all-at-once, anyway.
Are you certain that revealing the top-level <div> is the cause of your lag? A code sample might help the community find the source of your problem. A first suggestion would be to wrap the code that changes display inside a requestAnimationFrame. (MDN on rAF)
Note 1A: I say "exponential" because you are revealing each child separately versus one container element, but of course the number of operations is linear with respect to the total number of descendants, ignoring their relative "container"/"contained" status.
Note 1B: It is not necessarily the case that this code will "thrash" your layout. You are performing a sequence of "writes" to the layout, which a modern browser will probably batch automatically at the end of a frame, provided all the function calls can be processed within the space of one frame (~17ms); this applies to the non-throttled sibling reveals. The asynchronous throttling would allow "reads" from other parts of your code, forcing a re-flow, but since the delay is already several frames long, this is irrelevant. The point is that this code will not reduce "lag" of any kind.
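As a concrete version of the requestAnimationFrame suggestion above, here is a minimal sketch that reveals the children in batches, one batch per animation frame; revealInFrames and batchSize are illustrative names, not part of the original code:
function revealInFrames(page, batchSize) {
    var children = Array.prototype.slice.call(page.children);
    var index = 0;
    page.style.display = '';

    function step() {
        // Reveal one batch of siblings, then yield until the next frame.
        for (var n = 0; n < batchSize && index < children.length; n++, index++) {
            children[index].style.display = '';
        }
        if (index < children.length) requestAnimationFrame(step);
    }
    requestAnimationFrame(step);
}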
Thanks to this-vidor for explaining some of the problems with the function I had. I don't know exactly what was causing the very strange behavior in my particular situation, because I tried to reproduce it with fake data on jsbin and did not get the same problems. I decided to build a custom function for my particular situation; it looks like this:
Messages: function (page) {
    var messageCount = 0;
    var curThrottle = 0;
    page.style.display = '';
    var children = page.children;
    var lastChild = children[children.length - 1];
    var lastChildsChildren = lastChild.children;
    for (var i = 0; i < lastChildsChildren.length; i++) {
        var child = lastChildsChildren[i];
        child.style.display = '';
        var messages = child.children[child.children.length - 1].children;
        for (var j = 0; j < messages.length; j++) {
            if (++messageCount % 40 === 0) curThrottle += 30;
            var message = messages[j];
            (function (message) {
                setTimeout(function () {
                    message.style.display = '';
                }, curThrottle);
            })(message);
        }
    }
}
I have a page with JavaScript which takes a very long time to run. I have profiled the code using Firefox, and the following is the output.
As you can see, I have moved the time-consuming lines into a method _doStuff, which seems to do a lot of graphics-related work. Following is the content of the _doStuff method:
_doStuff: function (tds, colHeaderTds, mainAreaTds) {
    for (var i = 1; i < tds.length; i++) {
        if (colHeaderTds[i].offsetWidth <= tds[i].offsetWidth) {
            colHeaderTds[i].style.width = tds[i].offsetWidth + "px";
        }
        mainAreaTds[i].style.width = colHeaderTds[i].offsetWidth + "px";
    }
},
I am assuming that the time-consuming graphics sections are due to setting the widths of the elements. Is this observation correct? And how should I go about optimizing the code so that the page takes less time to load?
On every iteration of your loop, JS changes the DOM tree and forces the browser to repaint it.
The good practice is to make a copy of your element, modify the copy in the loop, and after the loop replace the original element's .innerHTML.
More reading about repaints on this topic here.
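The expensive pattern in _doStuff itself is that every offsetWidth read follows a style write, forcing a synchronous reflow on each iteration. A common alternative to the copy approach, batching all reads before all writes, is sketched below; it assumes box-sizing is such that the header cell's final offsetWidth equals the larger of the two measured widths:
_doStuff: function (tds, colHeaderTds, mainAreaTds) {
    var i, tdWidths = [], colWidths = [];

    // Read phase: take every measurement first, so layout is computed at most once.
    for (i = 1; i < tds.length; i++) {
        tdWidths[i] = tds[i].offsetWidth;
        colWidths[i] = colHeaderTds[i].offsetWidth;
    }

    // Write phase: only style writes, with no reads in between, so the browser
    // can coalesce them instead of reflowing on every iteration.
    for (i = 1; i < tds.length; i++) {
        var width = Math.max(colWidths[i], tdWidths[i]); // assumption: offsetWidth tracks style.width
        colHeaderTds[i].style.width = width + "px";
        mainAreaTds[i].style.width = width + "px";
    }
},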
I have a vanilla JavaScript function to test appending large numbers of elements to the DOM:
var start = new Date().getTime();
var blah;
var div = document.getElementById("me");
for (var i = 0; i < 5000; ++i) {
    div.innerHTML += "<div>" + i + "</div>"; // simply add to the div
}
var end = new Date().getTime();
var time = end - start;
alert('Execution time: ' + time);
Results:

            Chrome   IE10
            ------   ----
Vanilla         39    130   (seconds)
jQuery:
for (var i = 0; i < 5000; ++i) {
    $("#me").append("<div>" + i + "</div>"); // now using append instead
}
Results:

            Chrome     IE10
            ------     ----
Vanilla     39,000  130,000   (milliseconds)
jQuery         260    1,300   (milliseconds)
NB: It didn't seem to have any impact on performance whether I used the $("#me") selector or passed in $(div).
Vanilla with appendChild:
for (var i = 0; i < 5000; ++i) {
    var el = document.createElement("div"); // now create an element and append it
    el.innerHTML = i;
    div.appendChild(el);
}
             Chrome     IE10
             ------     ----
Vanilla      39,000  130,000   (ms)
jQuery          260    1,300   (ms)
appendChild      30      240   (ms)
To my huge surprise, this was by far the fastest: on Chrome it takes a whopping 30 ms or so, and on IE around 240 ms.
You can play with all the variations here: Fiddle
I know there could be many other variations to test, but what is jQuery doing behind the scenes to make its .append() so much faster than native JS innerHTML +=, and why is creating a new element and appending it even faster?
If you do things right, you can pretty much double your "best" result.
Native DOM methods are always faster than their jQuery alternatives. However, .innerHTML is not ideal.
When you use .innerHTML += ..., here's what happens:
Build an HTML representation of the entire DOM that currently exists
Append your new string to it
Parse the result and create a whole new DOM tree from it
Put the new stuff in place of the old stuff
The native methods are significantly less work ;)
It should also be noted that innerHTML += ... completely nukes the DOM, meaning any references to the old elements are lost; in particular, event handlers are not kept (unless you used inline event handlers, which you shouldn't be doing).
Behind the scenes, jQuery is using document fragments, which perform much better than straight manipulation of the document. John Resig discussed document fragments' superior performance in 2008, which should give you a solid explanation about what jQuery is doing and why.
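For comparison, a plain-JS version of the benchmark using a fragment; this is a sketch of the general technique, not a claim about jQuery's exact internals:
var div = document.getElementById("me");
var fragment = document.createDocumentFragment();

for (var i = 0; i < 5000; ++i) {
    var el = document.createElement("div");
    el.textContent = i;       // textContent skips the HTML parser entirely
    fragment.appendChild(el); // appending to the fragment touches no live DOM
}

div.appendChild(fragment);    // a single live-DOM insertion at the end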
It seems to me that it would be much more efficient to calculate everything you wish to append beforehand, then append that, minimizing the DOM manipulation required. For example:
var toAppend = "";
for (var i = 0; i < 5000; ++i) {
    toAppend += "<div>" + i + "</div>";
}
// element.append(string) would insert literal text, so parse the markup instead:
div.insertAdjacentHTML("beforeend", toAppend);
If you want them to be nested, you could make it recursive, or come up with some other solution. Either way, I believe string manipulation will always be faster than DOM manipulation.
I have 2000 rows of data as follows:
<div class="rr cf">
<span>VLKN DR EXP</span>
<span>01046</span>
<span>VELANKANNI</span>
<span>20:30</span>
<span>DADAR</span>
<span>10:00</span>
</div>
On a button click I am checking for text within them and updating the display of each row to block or none. The code that does this is:
$('.rr').each(function () {
    this.style.display = "block";
});

var nodes = $(".rr");
for (var i = 0; i < nodes.length; i++) {
    // if data found
    nodes.get(i).style.display = "block";
    // else
    nodes.get(i).style.display = "none";
}
This seems to be very slow; I get Chrome's alert box asking whether to kill the page.
Any ideas? What optimizations can I do here?
Local Variables and Loops
Another simple way to improve the performance of a loop is to
decrement the iterator toward 0 rather than incrementing toward the
total length. Making this simple change can result in savings of up to
50% off the original execution time, depending on the complexity of
each iteration.
Taken from: http://oreilly.com/server-administration/excerpts/even-faster-websites/writing-efficient-javascript.html
Try saving the nodes.length as a local variable so that the loop doesn't have to compute it each time.
Also, you can store nodes.get(i) into a local variable to save some time if you are accessing that data a lot.
If the order isn't important, consider decrementing your for loop towards 0.
jQuery's each() loop is a bit slower than looping through the set yourself as well. You can see here that there is a clear difference.
Very simple example
You'll see that in my example, I've condensed the loop into a while loop:
var nodes = $(".rr span");
var i = nodes.length;
while(i--){
if(i%2 === 0){
nodes.get(i).style.color = "blue";}
}
Notice that the while loop decrements i on each iteration; when i reaches 0, the loop exits, because while(0) evaluates to false.
"Chunking" the Array
The chunk() function is designed to process an array in small chunks
(hence the name), and accepts three arguments: a “to do” list of
items, the function to process each item, and an optional context
variable for setting the value of this within the process() function.
A timer is used to delay the processing of each item (100ms in this
case, but feel free to alter for your specific use). Each time
through, the first item in the array is removed and passed to the
process() function. If there are still items left to process, another
timer is used to repeat the process.
Have a look at Nick Zakas's chunk method defined here, if you need to run the loop in sections to reduce the chance of crashing the browser:
function chunk(array, process, context) {
    setTimeout(function doChunk() {
        var item = array.shift();
        process.call(context, item);
        if (array.length > 0) {
            // named function expression instead of the deprecated arguments.callee
            setTimeout(doChunk, 100);
        }
    }, 100);
}
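A hypothetical usage for the question's rows, processing one per tick; rowMatches() is a stand-in for whatever text check the button click performs:
chunk($('.rr').toArray(), function (row) {
    row.style.display = rowMatches(row) ? "block" : "none";
});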
Using createDocumentFragment()
Since the document fragment is in memory and not part of the main DOM
tree, appending children to it does not cause page reflow (computation
of element's position and geometry). Consequently, using document
fragments often results in better performance.
DocumentFragments are supported in all browsers, even Internet Explorer
6, so there is no reason not to use them.
Reflow is the process by which the geometry of the layout engine's
formatting objects is computed.
Since you are changing the display property of these elements iteratively, the page must 'repaint' the window for each change. If you use createDocumentFragment, make all the changes there, and then push those to the DOM, you drastically reduce the amount of repainting necessary.
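A sketch of that idea, assuming all the .rr rows share one parent that contains nothing else, and with rowMatches() standing in for the text check:
var container = document.querySelector('.rr').parentNode; // assumed common parent
var rows = container.querySelectorAll('.rr');             // static NodeList
var fragment = document.createDocumentFragment();

for (var i = 0; i < rows.length; i++) {
    fragment.appendChild(rows[i]); // moving a node detaches it from the live DOM
    rows[i].style.display = rowMatches(rows[i]) ? 'block' : 'none'; // no reflow while detached
}

container.appendChild(fragment); // re-insert all rows in one operation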
Firstly, where are the delays occurring: in the jQuery code, or the data check? If it is the jQuery, you could try detaching the data container element (i.e. the HTML element that contains all the .rr divs) from the DOM, making your changes, and then re-attaching it. This stops the browser re-processing the DOM after each change.
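A minimal sketch of the detach/re-attach idea using jQuery's .detach(); #list is an assumed id for the container, and rowMatches() is a stand-in for the data check:
var $list = $('#list');                             // assumed container of all the .rr divs
var $placeholder = $('<div/>').insertBefore($list); // marks the container's position
$list.detach();                                     // work on the subtree outside the live DOM

$list.find('.rr').each(function () {
    this.style.display = rowMatches(this) ? 'block' : 'none';
});

$placeholder.replaceWith($list);                    // re-attach: one reflow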
I would try:
1) set the display of the common parent element of all those divs to "none"
2) loop through the divs, setting each one's display as appropriate
3) set the parent element's display back to block
I believe this will help because it gives the browser the opportunity to aggregate the rendering updates instead of forcing it to fully complete each one every time you change the display property. The visibility of the child nodes is irrelevant while the parent isn't displayed, so the browser no longer needs to render a change in each child until the parent becomes visible again.
Also, I fail to see the purpose of first looping through them all and setting them all to block before you loop again and set them to their intended values.
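A sketch of the three steps above; parent is assumed to be the rows' common container, and rowMatches() again stands in for the data check:
var parent = document.querySelector('.rr').parentNode;
var rows = parent.getElementsByClassName('rr');

parent.style.display = 'none';  // 1) hide the container
for (var i = 0; i < rows.length; i++) {
    rows[i].style.display = rowMatches(rows[i]) ? 'block' : 'none'; // 2) toggle each row
}
parent.style.display = 'block'; // 3) show the container again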
Don't use jQuery here; jQuery will just slow things down.
var elements = document.getElementsByClassName('rr'),
    len = elements.length;

for (var i = 0; i < len; i++) {
    var ele = elements[i];
    if (ele.innerHTML.search(/01046/) != -1)
        ele.style.display = "none";
    else
        ele.style.display = "block"; // show the rest, matching the question's intent
}
This should be much faster.
I'm also having performance problems while looping through roughly 1500 items.
As you might have guessed, the loop itself isn't the bottleneck. It's the operation you do within it that's the problem.
So what I did was spread the load over time using setTimeout. Not the prettiest of solutions, but it keeps the browser responsive between the updates.
var _timeout_ = 0;
for (var i = 0; i < nodes.length; i++) {
    setTimeout(
        (function (i) {
            return function () {
                if (stuff) {
                    nodes.get(i).style.display = "block";
                } else {
                    nodes.get(i).style.display = "none";
                }
            };
        })(i),
        _timeout_
    );
    _timeout_ += 4;
}
This delays every update by 4 milliseconds. If an operation takes longer than that, the browser will become unresponsive; if it takes only 2 milliseconds on your slowest browser, you can set the delay to 3, and so on. Just play around with it.
The while statement in this function runs too slowly (it blocks page load for 4-5 seconds) in IE/Firefox, but runs fast in Safari...
It measures the pixel width of text on a page and truncates the text until it reaches the ideal width:
function constrain(text, ideal_width) {
    $('.temp_item').html(text);
    var item_width = $('span.temp_item').width();
    var ideal = parseInt(ideal_width);
    var smaller_text = text;
    var original = text.length;
    while (item_width > ideal) {
        smaller_text = smaller_text.substr(0, smaller_text.length - 1);
        $('.temp_item').html(smaller_text);
        item_width = $('span.temp_item').width();
    }
    var final_length = smaller_text.length;
    if (final_length != original) {
        return smaller_text + '…';
    } else {
        return text;
    }
}
Any way to improve performance? How would I convert this to a bubble-sort function?
Thanks!
Move the calls to $() outside of the loop and store the result in a temporary variable. Running that function is going to be the slowest thing in your code, aside from the call to .html().
They work very, very hard on making the selector engines in libraries fast, but they are still dog slow compared to normal JavaScript operations (like looking up a variable in the local scope) because they have to interact with the DOM. Especially with a class selector like that, jQuery has to loop through basically every element in the document, looking at each class attribute and running a regex on it. Every time round the loop! Get as much of that stuff out of your tight loops as you can. WebKit runs it fast because it has getElementsByClassName while the other browsers don't (yet).
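Applied to the function in the question, that looks roughly like this; a sketch that keeps the question's names, and assumes .temp_item and span.temp_item select the same element, as the original code implies:
function constrain(text, ideal_width) {
    var $temp = $('span.temp_item'); // look the element up once, outside the loop
    $temp.html(text);
    var item_width = $temp.width();
    var ideal = parseInt(ideal_width);
    var smaller_text = text;

    while (item_width > ideal) {
        smaller_text = smaller_text.substr(0, smaller_text.length - 1);
        $temp.html(smaller_text); // reuse the cached jQuery object
        item_width = $temp.width();
    }

    return smaller_text.length !== text.length ? smaller_text + '…' : text;
}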
Instead of removing one character at a time until you find the ideal width, you could use a binary search.
I see that the problem is that you are constantly modifying the DOM in the loop, by setting the html of the temp_item and then re-reading the width.
I don't know the context of your problem, but trying to adjust the layout by measuring the rendered elements is not a good practice from my point of view.
Maybe you could approach the problem from a different angle. Truncating to a fixed width is common.
Another possibility (a hack?), if you have no other choice, could be to use the overflow CSS property of the container element and put the … in another element next to the text. Though I recommend you rethink the need to solve the problem the way you are intending.
Hugo
Other than the suggestion by Breton, another possibility to speed up your algorithm would be to use a binary search on the text length. Currently you are decrementing the length by one character at a time, which is O(N) in the length of the string; a binary search is O(log N).
Roughly speaking, something like this:
function constrain(text, ideal_width) {
    ...
    var temp_item = $('.temp_item');
    var span_temp_item = $('span.temp_item');
    var text_len_lower = 0;
    var text_len_higher = smaller_text.length;
    while (true) {
        if (item_width > ideal) {
            // make smaller, to the mean of "lower" and this
            text_len_higher = smaller_text.length;
            smaller_text = text.substr(0,
                (smaller_text.length + text_len_lower) / 2);
        } else {
            if (smaller_text.length >= text_len_higher) break;
            // make larger, to the mean of "higher" and this
            text_len_lower = smaller_text.length;
            smaller_text = text.substr(0,
                (smaller_text.length + text_len_higher) / 2);
        }
        temp_item.html(smaller_text);
        item_width = span_temp_item.width();
    }
    ...
}
One thing to note is that each time you add something to the DOM, or change the html in a node, the page has to redraw itself, which is an expensive operation. Moving any HTML updates outside of a loop might help speed things up quite a bit.
As others have mentioned, you could move the calls to $() outside the loop. Create a reference to the element once, then call the methods on it within the loop, as 1800 INFORMATION mentioned.
If you use Firefox with the Firebug plugin, there's a great way of profiling the code to see what's taking the longest time. Just click profile under the first tab, do your action, then click profile again. It'll show a table with the time it took for each part of your code. Chances are you'll see a lot of things in the list that are in your js framework library; but you can isolate that as well with a little trial and error.