In my application (developed back in 2006), the developers used dtree.js (Link) to render a hierarchy tree. The problem appeared in 2010, when the tree grew to 1300 nodes and a depth of up to 13 levels. Since then the page loads very slowly, and in IE it triggers the infamous "Stop running this script?" error. I want to improve the performance, but all my tricks have failed:
Caching variables, DOM elements.
Calculating array lengths outside loops.
Minimizing use of loops.
Apart from this, I tried to use setTimeout() to break the execution into smaller tasks, but I was not able to get it working, as it has many restrictions. Also, I cannot move the rendering of the tree to the server side.
Any help is appreciated.
Thanks,
Sid
Typically what is slow in any browser is anything to do with the DOM.
If you can lazy-load any part of the tree's HTML representation, do it.
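For the lazy-loading part, here is a minimal sketch that builds a node's children only when it is first expanded (buildChildrenHtml is a hypothetical helper that returns the HTML for a node's children):
function onNodeExpand(nodeElement, node) {
    // Build the subtree's HTML only the first time this node is opened.
    if (!nodeElement.loaded) {
        nodeElement.innerHTML = buildChildrenHtml(node);
        nodeElement.loaded = true;
    }
}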
In general, try to minimize the number of times you edit the DOM.
Example:
for (var i = 0; i < data.length; i += 1) {
    dom_element.innerHTML += data[i].some_data; // touches the live DOM on every iteration
}
vs
var string = "";
for(var i = 0; i < data.length; i += 1) {
string += data.some_data;
}
dom_element.innerHTML += string; // only one call to innerHTML, likely much faster!
innerHTML is also typically faster than building nodes with DOM methods (document.createElement, element.appendChild and so on).
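For comparison, here is a sketch of the DOM-method version of the same loop, treating each item as plain text for simplicity; relative speed varies by browser and version, so profile before committing to either approach:
var fragment = document.createDocumentFragment();
for (var i = 0; i < data.length; i += 1) {
    // Nodes are assembled off-DOM inside the fragment...
    fragment.appendChild(document.createTextNode(data[i].some_data));
}
dom_element.appendChild(fragment); // ...so this is still only one insertion into the live DOM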
I want to use AJAX to load an HTML file into a <div>. I will then need to run jsMath on this. Everything I have done so far with innerHTML has been a paragraph or two, maybe a table and/or image. Nothing too fancy.
What potential problems may occur when I set innerHTML to an external 25k file with all sorts of complex CSS formatting (thanks to jsMath)? I can't think of any other method of doing this, but I need to know if there are any limitations.
Thanks in advance.
--Dave
I don't know about any browser-specific size limits, but if you assign a string longer than 65536 characters, Chrome splits it into many elem.childNodes, so you might have to loop over these nodes and concatenate them.
Run the snippet below in Chrome Dev Tools. It constructs a 160k string, but elem.childNodes[0] gets clipped to 65536 chars.
var longString = '1234567890';
for (var i = 0; i < 14; ++i) {
    longString = longString + longString; // doubles each pass: 10 * 2^14 = 163840 chars
}
console.log('The length of our long string: ' + longString.length);
var elem = document.createElement('div');
elem.innerHTML = longString;
var innerHtmlValue = elem.childNodes[0].nodeValue;
console.log('The length as innerHTML-childNodes[0]: ' + innerHtmlValue.length);
console.log('Num child nodes: ' + elem.childNodes.length);
Result: (Chrome version 39.0.2171.95 (64-bit), Linux Mint 17)
The length of our long string: 163840
The length as innerHTML-childNodes[0]: 65536
Num child nodes: 3
But in Firefox, innerHTML doesn't split the contents into many nodes: (Firefox version 34.0, Linux Mint 17)
"The length of our long string: 163840"
"The length as innerHTML-childNodes[0]: 163840"
"Num child nodes: 1"
So you'd need to take into account that different browsers handle childNodes differently, and perhaps iterate over all child nodes and concatenate their values, as in the sketch below. (I noticed this because I tried to use innerHTML to unescape a > 100k HTML-encoded string.)
In fact, in Firefox I can create an innerHTML-childNodes[0] of length 167,772,160 by looping to i < 24 above. But somewhere above this length, I hit InternalError: allocation size overflow.
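A minimal sketch of that concatenation, assuming elem contains only text nodes:
var text = '';
for (var i = 0; i < elem.childNodes.length; i++) {
    text += elem.childNodes[i].nodeValue; // re-join Chrome's 65536-char chunks
}
console.log('Re-joined length: ' + text.length); // 163840 again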
There's nothing technically preventing you from doing this. The biggest issue will be page load time. Be sure to include some sort of indication that the data is loading, or it will look like nothing's happening.
In the application I am currently working on, I have not had any problems in any browser setting innerHTML to a string of 30k or more. (I don't know what the limit is.)
The only kinds of limits on this sort of thing are bandwidth- and processor-related. Make sure you don't have a low timeout set on your AJAX request. You should also test on some slower computers to see if there is a memory issue; some old browsers can be pretty unforgiving of large objects in memory.
You'll probably want to profile this with a tool like dynaTrace Ajax or Speed Tracer to understand how setting innerHTML to a really huge value affects performance. You might want to compare it with another approach, like putting the new content in an iframe or paginating the content.
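Before reaching for a full profiler, a quick first measurement is easy to get with console.time (a sketch; target and hugeString are placeholders for your element and your 25k payload):
console.time('innerHTML');
target.innerHTML = hugeString; // the assignment you want to measure
console.timeEnd('innerHTML'); // prints the elapsed time to the console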
Your limit will most likely be the download limit set by your web server, usually a couple of MB. Several web frameworks allow increasing this size, but you can't just do that casually, because it would also mean increasing the buffer size, which is not a good thing.
I have a page with JavaScript which takes a very long time to run. I have profiled the code using Firefox, and the following is the output.
As you can see, I have moved the time-consuming lines to a method _doStuff, which seems to do a lot of graphics-related work. Following is the content of the _doStuff method.
_doStuff: function (tds, colHeaderTds, mainAreaTds) {
    var i;
    for (i = 1; i < tds.length; i++) {
        // Reading offsetWidth after writing style.width forces the browser
        // to recalculate layout on every iteration.
        if (colHeaderTds[i].offsetWidth <= tds[i].offsetWidth) {
            colHeaderTds[i].style.width = tds[i].offsetWidth + "px";
        }
        mainAreaTds[i].style.width = colHeaderTds[i].offsetWidth + "px";
    }
},
I am assuming that the time-consuming graphics sections are due to setting the widths of the elements. Is this observation correct? And how should I go about optimizing the code so that the page takes less time to load?
On every iteration of your loop, your JS changes the DOM tree and forces the browser to recalculate layout and repaint.
Good practice is to make a copy of your element, modify the copy in the loop, and after the loop replace the original element's .innerHTML in a single operation.
More reading about repaints on the topic here
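Another common variant, sketched below, is to batch all the reads before all the writes, so the reads of offsetWidth never force a layout recalculation mid-loop. This approximates the original logic (style.width and offsetWidth can differ by padding and borders, so verify it against your box model):
_doStuff: function (tds, colHeaderTds, mainAreaTds) {
    var i, widths = [];
    // Read phase: measure everything while the layout is still clean.
    for (i = 1; i < tds.length; i++) {
        widths[i] = Math.max(colHeaderTds[i].offsetWidth, tds[i].offsetWidth);
    }
    // Write phase: apply all style changes without reading layout back.
    for (i = 1; i < tds.length; i++) {
        colHeaderTds[i].style.width = widths[i] + "px";
        mainAreaTds[i].style.width = widths[i] + "px";
    }
},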
I am working on a terminal emulator for fun and have the basics of the backend up and running. However, I keep running into performance problems on the frontend.
As you all probably know, each character in a terminal window can have a different style (color, background, bold, underline, etc.). So my idea was to use a <span> for each character in the view window and apply an inline style where necessary, giving me the degree of control I need.
The problem is that the performance on a refresh is horrendous. Chrome can handle about 120 ops per second on my PC, and Firefox about 80, but Internet Explorer barely gets 6. So after my stint with HTML I tried canvas, but text rendering on a canvas is ultra slow. Online I read that caching helps, so I implemented a cache for each character and applied colors to the bitmapped font with a composite operation. However, this was way, way slower than the DOM.
Then I went back to the DOM and tried using document.createDocumentFragment, but it performs a little bit worse than just using the standard approach.
I have no idea where to begin optimizing now. I could keep track of which characters change and when, but I will still run into this slowness when the terminal gets a lot of input.
I am new to the DOM, so I might be doing something completely wrong...
Any help is appreciated!
Here is a jsperf with a few testcases:
http://jsperf.com/canvas-bitma32p-cache-test/6
Direct insertion of HTML as string text is surprisingly efficient when you use insertAdjacentHTML to append the HTML to an element.
var div = document.getElementById("output");
var charCount = 50;
var line, i, j;
for (i = 0; i < charCount; i++) {
    line = "";
    for (j = 0; j < charCount; j++) {
        line += "<span style=\"background-color:rgb(0,0,255);color:rgb(255,127,0)\">o</span>";
    }
    div.insertAdjacentHTML("beforeend", "<div>" + line + "</div>");
}
#output{font-family:courier; font-size:6pt;}
<div id="output"></div>
The downside of this approach is obvious: you never get the chance to treat each appended element as an object in JavaScript (they're just plain strings), so you can't, for example, directly attach an event listener to each of them. (You could do so after the fact by querying the resultant HTML for matching elements using document.querySelectorAll(".some-selector").)
If you're truly just formatting output being printed to the screen, insertAdjacentHTML is perfect.
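If you do need interactivity, one workaround is a single delegated listener on the container instead of one per <span> (a sketch, assuming clicks on individual characters are what you care about):
document.getElementById("output").addEventListener("click", function (e) {
    // One listener on the container handles clicks for every appended <span>.
    if (e.target.tagName === "SPAN") {
        console.log("Clicked character: " + e.target.textContent);
    }
});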
My web page creates a lot of DOM elements at once in a (batch) tight loop, depending on data fed by my Comet web server.
I tried several methods to create those elements. Basically it boils down to either (1):
var container = $('#selector');
for (...) container.append('<html code of the element>');
or (2):
var html = '';
for (...) html += '<html code of the element>';
$('#selector').append(html);
or (3):
var html = [];
for (...) html.push('<html code of the element>');
$('#selector').append(html.join(''));
Performance-wise, (1) is absolutely awful (3 s per batch on a desktop computer, up to 5 min on a Galaxy Note fondleslab), and (2) and (3) are roughly equivalent (300 ms on desktop, 1.5 s on the fondleslab). Those timings are for about 4000 elements, which is about 1/4 of what I expect in production, and that is not acceptable, since I should be able to handle the full amount of data (15k elements) in under 1 s, even on the fondleslab.
The very fact that (2) and (3) have the same performance makes me think that I'm hitting the infamous "naively concatenating strings uselessly reallocates and copies lots of memory" problem (even though I'd expect join() to be smarter than that). [edit: after looking more closely into it, it turns out I was misled about that; the problem is more on the rendering side -- thanks DanC]
In C++ I'd just go with std::string::reserve() and operator += to avoid the useless reallocations, but I have no idea how to do that in JavaScript.
Any idea how to improve the performance further? Or can you at least point me to ways to identify the bottleneck (even though I'm pretty sure it's the string concatenation)? I'm certainly no JavaScript guru...
Thanks for reading.
For what it's worth, that huge number of elements is because I'm drawing a (mostly real-time) graph using DIVs. I'm well aware of Canvas, but my app has to be compatible with old browsers, so unfortunately it's not an option. :(
Using DOM methods, building and appending 12000 elements clocks in around 55ms on my dual-core MacBook.
document.getElementById('foo').addEventListener('click', function () {
    build();
}, false);

function build() {
    console.time('build');
    var fragment = document.createDocumentFragment();
    for (var e = 0; e < 12000; e++) {
        var el = document.createElement('div');
        el.appendChild(document.createTextNode(e));
        fragment.appendChild(el);
    }
    document.querySelectorAll('body')[0].appendChild(fragment);
    console.timeEnd('build');
}
Fiddle
Resig on document.createDocumentFragment
This is not a solution to the performance problem, but only a way to ensure the UI loop is free to handle other requests.
You could try something like this:
var container = $('#selector');
for (...) setTimeout(function() { container.append('<html code of the element>'); });
To be slightly more performant, I would actually call setTimeout only after every x iterations, after building up a larger string. And, not having tried this myself, I am not sure if the ordering of setTimeout calls will be preserved. If not, then you can do something more like this:
var arrayOfStrings = [ /* each element is a batch of 100 or so elements' HTML */ ];

function processNext(arr, i) {
    container.append(arr[i]);
    if (i + 1 < arr.length) { // schedule the next batch only if one exists
        setTimeout(function() { processNext(arr, i + 1); });
    }
}

processNext(arrayOfStrings, 0);
Not pretty, but it would ensure the UI is not locked up while the DOM is manipulated.
We are using Bing and/or Google javascript map controls, sometimes with large numbers of dynamically alterable overlays.
I have read http://support.microsoft.com/kb/175500/en-us and know how to set the MaxScriptStatments registry key.
The problem is that we do not want to programmatically set this or any other registry key on users' computers, but would rather achieve the same effect some other way.
Is there another way?
There is hardly anything you can do besides making your script "lighter". Try to profile it and figure out where the heaviest crunching takes place, then try to optimize those parts, break them down into smaller components, and call the next component with a timeout after the previous one has finished. Basically, give control back to the browser every once in a while; don't crunch everything in one function call.
Generally, a long-running script is encountered in code that is looping.
If you have to loop over a large collection of data and it can be done asynchronously, akin to another thread, then move the processing to a web worker (http://www.w3schools.com/HTML/html5_webworkers.asp).
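A minimal sketch of the worker hand-off, assuming the heavy loop needs only plain data and no DOM access (worker.js and largeArray are placeholder names):
// main thread
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
    console.log('Worker finished, result length: ' + e.data.length);
};
worker.postMessage(largeArray); // hand the data off the UI thread

// worker.js
self.onmessage = function (e) {
    var data = e.data, result = [], i;
    for (i = 0; i < data.length; i++) {
        result.push(data[i]); // process each item here
    }
    self.postMessage(result);
};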
If you cannot or do not want to use a web worker, then find the main loop that is causing the long-running script, give it a maximum number of iterations per pass, and make it yield back to the browser using setTimeout.
Bad: (thingToProcess may be too large, resulting in a long-running script)
function Process(thingToProcess) {
    var i;
    for (i = 0; i < thingToProcess.length; i++) {
        // process here
    }
}
Good: (only allows 100 iterations before yielding back)
function Process(thingToProcess, start) {
    var i;
    if (!start) start = 0;
    for (i = start; i < thingToProcess.length && i - start < 100; i++) {
        // process here
    }
    if (i < thingToProcess.length) // still more to process
        setTimeout(function () { Process(thingToProcess, i); }, 0);
}
Both can be called in the same way:
Process(myCollectionToProcess);