innerHTML size limit - javascript

I want to use AJAX to load an HTML file into a <div>; I will then need to run jsMath on it. Everything I have done so far with innerHTML has been a paragraph or two, maybe a table and/or an image. Nothing too fancy.
What potential problems may occur when I set innerHTML to an external 25 KB file with all sorts of complex CSS formatting (thanks to jsMath)? I can't think of any other method of doing this, but I need to know if there are any limitations.
Thanks in advance.
--Dave

I don't know about any browser-specific size limits, but if you assign a string longer than 65,536 characters, Chrome splits it into multiple elem.childNodes, so you might have to loop over these nodes and concatenate them.
Run the snippet below in the Chrome Dev Tools console. It constructs a 160 KB string, but theDivElement.childNodes[0] gets clipped to 65,536 characters.
var longString = '1234567890';
// Double the string 14 times: 10 * 2^14 = 163,840 characters.
for (var i = 0; i < 14; ++i) {
  longString = longString + longString;
}
console.log('The length of our long string: ' + longString.length);
var elem = document.createElement('div');
elem.innerHTML = longString;
// In Chrome, only the first 65,536 characters end up in childNodes[0].
var innerHtmlValue = elem.childNodes[0].nodeValue;
console.log('The length as innerHTML-childNodes[0]: ' + innerHtmlValue.length);
console.log('Num child nodes: ' + elem.childNodes.length);
Result: (Chrome version 39.0.2171.95 (64-bit), Linux Mint 17)
The length of our long string: 163840
The length as innerHTML-childNodes[0]: 65536
Num child nodes: 3
But in Firefox, innerHTML doesn't split the contents into many nodes: (Firefox version 34.0, Linux Mint 17)
"The length of our long string: 163840"
"The length as innerHTML-childNodes[0]: 163840"
"Num child nodes: 1"
So you'd need to take into account that different browsers handle childNodes differently, and perhaps iterate over all child nodes and concatenate them. (I noticed this because I tried to use innerHTML to unescape an HTML-encoded string of more than 100 KB.)
In fact, in Firefox I can create an innerHTML-childNodes[0] of length 167,772,160 by looping to i < 24 above. But somewhere above this length there is an "InternalError: allocation size overflow" error.
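If what you need back is just the text, a minimal sketch of that concatenation step (assuming the children are all text nodes, as in the digit-string example above; elem.textContent would give the same result in that case):
// Join the text of every child node so the result is the same whether the
// browser kept the content in one node or split it into several.
var joined = '';
for (var n = 0; n < elem.childNodes.length; n++) {
  joined += elem.childNodes[n].nodeValue;
}
console.log('Re-joined length: ' + joined.length); // 163840 in both Chrome and Firefox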

There's nothing to prevent you from doing this technically. The biggest issue will be page load time. Be sure to include some sort of indication that the data is loading or it will look like nothing's happening.
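A minimal sketch of that, assuming a div with id "loading" for the indicator and one with id "content" as the target (both ids and the URL are made up for the example):
var loading = document.getElementById('loading');
var content = document.getElementById('content');

loading.style.display = 'block';       // show "Loading..." before the request starts
var xhr = new XMLHttpRequest();
xhr.open('GET', 'math-notes.html');    // hypothetical URL for the external 25 KB file
xhr.onload = function () {
  content.innerHTML = xhr.responseText;
  loading.style.display = 'none';      // hide the indicator once the content is in place
  // run jsMath over the new content here (see the jsMath docs for the exact call)
};
xhr.send();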

In the application I am currently working on, I have not had any problems in any browser setting innerHTML to a string of 30 KB or more. (I don't know what the limit is.)

The only limits on this kind of thing are bandwidth- and processor-related. Make sure you don't have a low timeout set on your AJAX request. You should also test on some lower-end computers to see if there is a memory issue. Some old browsers can be pretty unforgiving of large objects in memory.
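For example, if the request goes through jQuery, the timeout is an explicit option in milliseconds (the URL and selector below are made up for the sketch):
$.ajax({
  url: 'big-page.html',
  timeout: 30000,               // 30 seconds; generous enough for slow connections
  success: function (html) {
    $('#content').html(html);   // same effect as setting innerHTML on the target div
  },
  error: function (xhr, status) {
    if (status === 'timeout') {
      // retry, or tell the user the load took too long
    }
  }
});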

You'll probably want to profile this with a tool like Dynatrace AJAX Edition or Speed Tracer to understand how setting innerHTML to a really huge value affects performance. You might want to compare it with other approaches, like putting the new content in an iframe or paginating the content.

Your limit will most likely be the download size limit set by your web server, usually a couple of MB. Several web frameworks allow increasing this size, but you can't just do that freely, because it would mean increasing the buffer size, which is not a good thing.

Related

Javascript - Chrome: childnodes maximum length [duplicate]

html DOM node limits

I am working on a terminal emulator for fun and have the basics of the backend up and running. However, I keep running into performance problems on the frontend.
As you all probably know, each character in a terminal window can have a different style (color, backdrop, bold, underline, etc.). So my idea was to use a <span> for each character in the view window and apply an inline style if necessary, so I have the degree of control I need.
The problem is that the performance is horrendous on a refresh. Chrome can handle it on my PC with about 120 ops per second and Firefox with 80, but Internet Explorer barely gets 6. So after my stint with HTML I tried to use canvas, but drawing text on a canvas is ultra slow. Online I read that caching helps, so I implemented a cache for each character and could apply colors to the then-bitmapped font with some composite operation. However, this is way, way slower than the DOM.
Then I went back to the DOM and tried using document.createDocumentFragment, but it performs a little bit worse than just using the standard approach.
I have no idea where to begin optimization now. I could keep track of which characters change, but then I will still run into this slowness when the terminal gets a lot of input.
I am new to the DOM, so I might be doing something completely wrong...
Any help is appreciated!
Here is a jsperf with a few testcases:
http://jsperf.com/canvas-bitma32p-cache-test/6
Direct insertion of HTML as string text is surprisingly efficient when you use insertAdjacentHTML to append the HTML to an element.
var div = document.getElementById("output");
var charCount = 50;
var line, i, j;
// Build each row as a single HTML string, then insert the whole row at once.
for (i = 0; i < charCount; i++) {
  line = "";
  for (j = 0; j < charCount; j++) {
    line += "<span style=\"background-color:rgb(0,0,255);color:rgb(255,127,0)\">o</span>";
  }
  div.insertAdjacentHTML("beforeend", "<div>" + line + "</div>");
}
#output{font-family:courier; font-size:6pt;}
<div id="output"></div>
The downside of this approach is obvious: you never get the chance to treat each appended element as an object in JavaScript (they're just plain strings), so you can't, for example, directly attach an event listener to each of them. (You could do so after the fact by querying the resulting HTML for matching elements using document.querySelectorAll(".some-selector").)
If you're truly just formatting output being printed to the screen, insertAdjacentHTML is perfect.
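If you do need to wire up events afterwards, a small sketch of that after-the-fact querying (the "cell" class name is made up here; the snippet above would have to add it to each span):
// Query the spans that were inserted as strings and attach behaviour to them afterwards.
var cells = document.querySelectorAll("#output span.cell");
for (var k = 0; k < cells.length; k++) {
  cells[k].addEventListener("click", function (e) {
    console.log("clicked:", e.target.textContent);
  });
}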

JS performance: create an awful lot of elements at once

My web page creates a lot of DOM elements at once in a tight (batch) loop, depending on data fed by my Comet web server.
I tried several methods to create those elements. Basically it boils down to either (1):
var container = $('#selector');
for (...) container.append('<html code of the element>');
or (2):
var html = '';
for (...) html += '<html code of the element>';
$('#selector').append(html);
or (3):
var html = [];
for (...) html.push('<html code of the element>');
$('#selector').append(html.join(''));
Performance-wise, (1) is absolutely awful (3 s per batch on a desktop computer, up to 5 min on a Galaxy Note fondleslab), while (2) and (3) are roughly equivalent (300 ms on desktop, 1.5 s on fondleslab). Those timings are for about 4000 elements, which is about a quarter of what I expect in production, and that is not acceptable: I should be able to handle this amount of data (15k elements) in under 1 s, even on the fondleslab.
The very fact that (2) and (3) have the same performance makes me think that I'm hitting the infamous "naively concatenating strings uselessly reallocates and copies lots of memory" problem (even though I'd expect join() to be smarter than that). [edit: after looking more closely into it, it turns out I was misled about that; the problem is more on the rendering side -- thanks DanC]
In C++ I'd just go with std::string::reserve() and operator+= to avoid the useless reallocations, but I have no idea how to do that in JavaScript.
Any idea how to improve the performance further? Or at least ways to identify the bottleneck (even though I'm pretty sure it's the string concatenations). I'm certainly no JavaScript guru...
Thanks for reading.
For what it's worth, that huge number of elements is because I'm drawing a (mostly real-time) graph using DIVs. I'm well aware of Canvas, but my app has to be compatible with old browsers, so unfortunately it's not an option. :(
Using DOM methods, building and appending 12000 elements clocks in around 55ms on my dual-core MacBook.
document.getElementById('foo').addEventListener('click', function () {
  build();
}, false);

function build() {
  console.time('build');
  var fragment = document.createDocumentFragment();
  for (var e = 0; e < 12000; e++) {
    var el = document.createElement('div');
    el.appendChild(document.createTextNode(e));
    fragment.appendChild(el);
  }
  document.querySelectorAll('body')[0].appendChild(fragment);
  console.timeEnd('build');
}
Fiddle
Resig on document.createDocumentFragment
This is not a solution to the performance problem, but only a way to ensure the UI loop is free to handle other requests.
You could try something like this:
var container = $('#selector');
for (...) setTimeout(function () { container.append('<html code of the element>'); });
To be slightly more performant, I would actually call setTimeout only after every x iterations, once a larger string has been built up. And, not having tried this myself, I am not sure whether the ordering of setTimeout calls will be preserved. If not, then you can do something more like this:
var arrayOfStrings = [ /* each element is the HTML for a batch of 100 or so elements */ ];
function processNext(arr, i) {
  container.append(arr[i]);
  if (i + 1 < arr.length) {
    setTimeout(function () { processNext(arr, i + 1); });
  }
}
processNext(arrayOfStrings, 0);
Not pretty, but would ensure the UI is not locked up while the DOM is manipulated.

Maximum size of an Array in Javascript

Context: I'm building a little site that reads an RSS feed and updates/checks the feed in the background. I have one array to store data to display, and another which stores the IDs of records that have already been shown.
Question: How many items can an array hold in JavaScript before things start getting slow or sluggish? I'm not sorting the array, but I am using jQuery's inArray function to do comparisons.
The website will be left running and updating, and it's unlikely that the browser will be restarted/refreshed very often.
If I should think about clearing some records from the array, what is the best way to remove some records after a limit, like 100 items?
The maximum length until "it gets sluggish" is totally dependent on your target machine and your actual code, so you'll need to test on that (those) platform(s) to see what is acceptable.
However, the maximum length of an array according to the ECMA-262 5th Edition specification is bound by an unsigned 32-bit integer due to the ToUint32 abstract operation, so the longest possible array could have 2^32 - 1 = 4,294,967,295 (about 4.29 billion) elements.
No need to trim the array, simply address it as a circular buffer (index % maxlen). This will ensure it never goes over the limit (implementing a circular buffer means that once you get to the end you wrap around to the beginning again - not possible to overrun the end of the array).
For example:
var container = new Array();
var maxlen = 100;
var index = 0;

// 'store' 1538 items (only the last 'maxlen' items are kept)
for (var i = 0; i < 1538; i++) {
  container[index++ % maxlen] = "storing" + i;
}

// get the 11th item still held in the buffer
eleventh = container[(index + 11) % maxlen];

// get the 35th item still held in the buffer
thirtyfifth = container[(index + 35) % maxlen];

// print out all 100 elements that we have left in the array; note
// that it doesn't matter if we address past 100 - circular buffer,
// so we'll simply get back to the beginning if we do that.
for (i = 0; i < 200; i++) {
  document.write(container[(index + i) % maxlen] + "<br>\n");
}
Like #maerics said, your target machine and browser will determine performance.
But for some real world numbers, on my 2017 enterprise Chromebook, running the operation:
console.time();
Array(x).fill(0).filter(x => x < 6).length
console.timeEnd();
x=5e4 takes 16ms, good enough for 60fps
x=4e6 takes 250ms, which is noticeable but not a big deal
x=3e7 takes 1300ms, which is pretty bad
x=4e7 takes 11000ms and allocates an extra 2.5GB of memory
So around 30 million elements is a hard upper limit, because the javascript VM falls off a cliff at 40 million elements and will probably crash the process.
EDIT: In the code above, I'm actually filling the array with elements and looping over them, simulating the minimum of what an app might want to do with an array. If you just run Array(2**32-1), you're creating a sparse array that's closer to an empty JavaScript object with a length, like {length: 4294967295}. If you actually tried to use all four billion of those elements, you'd definitely crash the JavaScript process.
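To make that sparse-versus-filled distinction concrete, a small console sketch (the fill line is commented out on purpose):
var sparse = Array(2**32 - 1);  // near-instant: only the length is set, no elements are allocated
console.log(sparse.length);     // 4294967295
console.log(0 in sparse);       // false - index 0 was never assigned
// Array(2**32 - 1).fill(0);    // don't: this would try to materialize ~4.3 billion slots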
You could try something like this to test and trim the length:
http://jsfiddle.net/orolo/wJDXL/
var longArray = [1, 2, 3, 4, 5, 6, 7, 8];
if (longArray.length >= 6) {
  longArray.length = 3;
}
alert(longArray); // 1, 2, 3
I have built a performance framework that manipulates and graphs millions of datasets, and even then the JavaScript calculation latency was on the order of tens of milliseconds. Unless you're worried about going over the array size limit, I don't think you have much to worry about.
It will be very browser-dependent. 100 items doesn't sound like a large number; I expect you could go a lot higher than that. Thousands shouldn't be a problem. What may be a problem is the total memory consumption.
I have shamelessly pulled some pretty big datasets into memory, and although it did get sluggish, it took maybe 15 MB of data and upwards with pretty intense calculations on the dataset. I doubt you will run into memory problems unless you have intense calculations on the data and many, many rows. Profiling and benchmarking with different mock result sets will be your best bet to evaluate performance.

Performance issue in Javascript library dtree.js

In my application (developed way back in 2006), the developers used dtree.js (Link) to render a hierarchy tree. The problem appeared in 2010, when the tree grew to 1300 nodes and a depth of up to 13 levels. Since then the page loads very slowly, and in IE it gives the infamous "Stop running this script?" error. I want to improve the performance, but all my tricks have failed:
Caching variables, DOM elements.
Calculating array lengths outside loops.
Minimizing use of loops.
Apart from this, I tried to use setTimeout() to break the execution into smaller tasks, but I was not able to get it working because it has many restrictions. Also, I cannot move the rendering of the tree to the server side.
Any help is appreciated.
Thanks,
Sid
Typically what is slow in any browser is anything to do with the DOM.
If you can lazy-load any part of the tree's HTML representation, do it (there is a rough sketch of this at the end of this answer).
In general, try to minimize the number of times you edit the DOM.
Example:
// Slow: touches innerHTML (and re-parses the element's content) on every iteration.
for (var i = 0; i < data.length; i += 1) {
  dom_element.innerHTML += data[i].some_data;
}
vs
// Faster: build the string first, then touch the DOM once.
var string = "";
for (var i = 0; i < data.length; i += 1) {
  string += data[i].some_data;
}
dom_element.innerHTML += string; // only one call to innerHTML, likely much faster!
innerHTML is also faster than building the nodes with DOM methods (document.createElement, element.appendChild, and so on).
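As for the lazy-loading suggestion above, a rough sketch of the idea (buildChildrenHtml and the nodeData shape are invented for illustration; they are not part of dtree.js):
// Render only the top level up front; build a branch's children the first time
// the user expands it, instead of building all 1300 nodes at page load.
function onExpand(nodeElement, nodeData) {
  if (!nodeElement.childrenLoaded) {
    nodeElement.insertAdjacentHTML('beforeend', buildChildrenHtml(nodeData.children));
    nodeElement.childrenLoaded = true;
  }
}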
