Improving Efficiency in jQuery function - javascript

The while statement in this function runs too slowly (it blocks page load for 4-5 seconds) in IE/Firefox, but runs fast in Safari...
It measures the pixel width of text on a page and truncates the text until it reaches the ideal width:
function constrain(text, ideal_width){
    $('.temp_item').html(text);
    var item_width = $('span.temp_item').width();
    var ideal = parseInt(ideal_width);
    var smaller_text = text;
    var original = text.length;
    while (item_width > ideal) {
        smaller_text = smaller_text.substr(0, (smaller_text.length - 1));
        $('.temp_item').html(smaller_text);
        item_width = $('span.temp_item').width();
    }
    var final_length = smaller_text.length;
    if (final_length != original) {
        return (smaller_text + '…');
    } else {
        return text;
    }
}
Any way to improve performance? How would I convert this to a binary search?
Thanks!

Move the calls to $() outside of the loop and store their results in temporary variables. Running that selector is going to be the slowest thing in your code, aside from the call to .html().
Library authors work very hard on making their selector engines fast, but they are still slow compared to normal JavaScript operations (like looking up a variable in the local scope) because they have to interact with the DOM. Especially with a class selector like that, jQuery has to loop through essentially every element in the document, looking at each class attribute and running a regex on it, every time round the loop! Get as much of that work out of your tight loops as you can. WebKit runs it fast because it has document.getElementsByClassName while the other browsers don't (yet).
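A minimal sketch of that change, assuming (as in the question's markup) that 'span.temp_item' selects the single measuring element:
function constrain(text, ideal_width) {
    var temp_item = $('span.temp_item'); // query the DOM once
    var ideal = parseInt(ideal_width, 10);
    var smaller_text = text;
    temp_item.html(smaller_text);
    var item_width = temp_item.width();
    while (item_width > ideal) {
        smaller_text = smaller_text.substr(0, smaller_text.length - 1);
        temp_item.html(smaller_text);   // reuse the cached jQuery object
        item_width = temp_item.width(); // no selector re-query per pass
    }
    return smaller_text.length === text.length ? text : smaller_text + '…';
}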

Instead of removing one character at a time until you find the ideal width, you could use a binary search.

I see that the problem is that you are constantly modifying the DOM in the loop, by setting the HTML of the temp_item and then re-reading its width.
I don't know the context of your problem, but adjusting the layout by measuring rendered elements is not good practice, from my point of view.
Maybe you could approach the problem from a different angle; truncating to a fixed width is common.
Another possibility (a hack?), if you have no other choice, could be to use the overflow CSS property of the container element and put the … in another element next to the text. Though I recommend you rethink the need to solve the problem the way you are intending.
Hugo
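For the overflow route, a minimal sketch applied from JavaScript (the selector and width here are hypothetical, and text-overflow is an addition not mentioned in the answer; where the browser supports it, it draws the … itself instead of needing a sibling element):
var el = document.querySelector('.item-name'); // hypothetical target element
el.style.width = '200px';                      // the ideal width
el.style.overflow = 'hidden';                  // clip anything wider
el.style.whiteSpace = 'nowrap';                // keep the text on one line
el.style.textOverflow = 'ellipsis';            // browser renders the … at the clip point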

Other than the suggestion by Breton, another way to speed up your algorithm would be to use a binary search on the text length. Currently you are decreasing the length by one character at a time, which is O(N) in the length of the string; a binary search is O(log N).
Roughly speaking, something like this:
function constrain(text, ideal_width){
    ...
    var temp_item = $('.temp_item');
    var span_temp_item = $('span.temp_item');
    // Invariant: a prefix of text_len_lower characters fits within the
    // ideal width; a prefix of text_len_higher characters does not.
    var text_len_lower = 0;
    var text_len_higher = smaller_text.length + 1;
    while (text_len_higher - text_len_lower > 1) {
        // probe the midpoint of the remaining range
        var mid = Math.floor((text_len_lower + text_len_higher) / 2);
        smaller_text = text.substr(0, mid);
        temp_item.html(smaller_text);
        item_width = span_temp_item.width();
        if (item_width > ideal) {
            text_len_higher = mid; // still too wide: search the lower half
        } else {
            text_len_lower = mid;  // fits: search the upper half
        }
    }
    smaller_text = text.substr(0, text_len_lower);
    ...
}

One thing to note is that each time you add something to the DOM, or change the HTML in a node, the page has to redraw itself, which is an expensive operation. Moving any HTML updates outside of a loop can speed things up quite a bit.
As others have mentioned, you can move the calls to $() outside the loop: create a reference to the element once, then just call the methods on it within the loop, as 1800 INFORMATION showed.
If you use Firefox with the Firebug plugin, there's a great way of profiling the code to see what's taking the longest time. Just click Profile under the first tab, perform your action, then click Profile again. It will show a table with the time taken by each part of your code. Chances are you'll see a lot of entries from your JS framework library, but you can isolate your own code with a little trial and error.

Related

How can I optimize the following JavaScript which sets the CSS properties of Elements

I have a page with JavaScript which takes a very long time to run. I have profiled the code using Firefox, and the profile showed that the time-consuming lines are in a method _doStuff, which seems to do a lot of graphics-related work. Following is the content of the _doStuff method.
_doStuff: function(tds, colHeaderTds, mainAreaTds) {
    for (var i = 1; i < tds.length; i++) {
        if (colHeaderTds[i].offsetWidth <= tds[i].offsetWidth) {
            colHeaderTds[i].style.width = tds[i].offsetWidth + "px";
        }
        mainAreaTds[i].style.width = colHeaderTds[i].offsetWidth + "px";
    }
},
I am assuming that the time-consuming graphics sections are due to setting the widths of the elements. Is this observation correct? And how should I go about optimizing the code so that the page takes less time to load?
Every iteration of your loop changes the DOM tree and forces the browser to repaint it.
Good practice is to make a copy of the element, modify the copy in the loop, and after the loop swap it in for the original (for example by replacing the original element's .innerHTML).
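Note that a detached copy won't help with the reads here, since offsetWidth needs a live layout. A related technique, sketched below under the same method signature, is to batch all the reads before all the writes, so the browser answers every offsetWidth from one layout pass and can coalesce the style changes into a single reflow (collapsing the original if-test into Math.max is my own simplification):
_doStuff: function(tds, colHeaderTds, mainAreaTds) {
    var widths = [], i;
    // Read phase: no writes have happened yet, so each offsetWidth
    // read is served from the same, still-valid layout.
    for (i = 1; i < tds.length; i++) {
        widths[i] = Math.max(colHeaderTds[i].offsetWidth, tds[i].offsetWidth);
    }
    // Write phase: only writes from here on, so the browser can
    // batch them into one reflow instead of one per iteration.
    for (i = 1; i < tds.length; i++) {
        colHeaderTds[i].style.width = widths[i] + "px";
        mainAreaTds[i].style.width = widths[i] + "px";
    }
},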

html DOM node limits

I am working on a terminal emulator for fun and have the basics of the backend up and running. However, I keep running into performance problems on the frontend.
As you all probably know, each character in a terminal window can have a different style (color, backdrop, bold, underline, etc.). So my idea was to use a <span> for each character in the view window and apply an inline style where necessary, so I have the degree of control I need.
The problem is that the performance on a refresh is horrendous. Chrome can handle it on my PC at about 120 ops per second and Firefox at 80, but Internet Explorer barely gets 6. After my stint with HTML I tried canvas, but text on a canvas is very slow. I read online that caching helps, so I implemented a cache for each character and applied colors to the bitmapped font with a composite operation. However, this was far slower than the DOM.
Then I went back to the DOM and tried using document.createDocumentFragment, but it performed slightly worse than the standard approach.
I have no idea where to begin optimizing now. I could keep track of which characters change, but I will still run into this slowness when the terminal gets a lot of input.
I am new to the DOM, so I might be doing something completely wrong...
Any help is appreciated!
Here is a jsperf with a few test cases:
http://jsperf.com/canvas-bitma32p-cache-test/6
Direct insertion of HTML as string text is surprisingly efficient when you use insertAdjacentHTML to append the HTML to an element.
var div = document.getElementById("output");
var charCount = 50;
var line, i, j;
for (i = 0; i < charCount; i++) {
    line = "";
    for (j = 0; j < charCount; j++) {
        line += "<span style=\"background-color:rgb(0,0,255);color:rgb(255,127,0)\">o</span>";
    }
    div.insertAdjacentHTML("beforeend", "<div>" + line + "</div>");
}
#output{font-family:courier; font-size:6pt;}
<div id="output"></div>
The downside of this approach is obvious: you never get the chance to treat each appended element as an object in JavaScript (they're just plain strings), so you can't, for example, directly attach an event listener to each of them. (You could do so after the fact by querying the resulting HTML for matching elements using document.querySelectorAll(".some-selector").)
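If you do need interactivity, one delegated listener on the container avoids touching the generated elements at all; a minimal sketch, assuming the #output container from the snippet above:
document.getElementById("output").addEventListener("click", function (e) {
    // One listener handles clicks on every generated <span>,
    // instead of attaching thousands of individual listeners.
    if (e.target.tagName === "SPAN") {
        console.log("clicked cell:", e.target.textContent);
    }
});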
If you're truly just formatting output being printed to the screen, insertAdjacentHTML is perfect.

Page safety of setInterval

I have a piece of text that uses WebKit's background-clip: text feature, and for purely aesthetic purposes I want to include a script that scrolls the image across the text. The one way I have thought of doing this is using setInterval to run the function included below. I want to know if I will run into any problems using setInterval, since I have it looping at 50 ms.
Function:
function movePos(){
    var obj = document.querySelector('h1 span');
    var pos = parseInt(obj.style.backgroundPositionY);
    obj.style.backgroundPositionY = (pos + 1) + 'px';
}
Any modern browser with even modest hardware should be able to handle executing these three lines of code 20 times per second. You should be fine.
If you want to optimize, and the element obj refers to won't change, you could cache it so you're not running a selector 20 times per second. I'm not certain, but the selector is probably the most CPU-intensive of the three lines of code.
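A minimal sketch of that caching, keeping the 50 ms interval from the question (the || 0 guard is an addition, since parseInt('') yields NaN before the style is first set):
var obj = document.querySelector('h1 span'); // query the DOM once, outside the timer
setInterval(function () {
    var pos = parseInt(obj.style.backgroundPositionY, 10) || 0;
    obj.style.backgroundPositionY = (pos + 1) + 'px';
}, 50);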

JS performance: create an awful lot of elements at once

My web page creates a lot of DOM elements at once in a tight (batch) loop, based on data fed by my Comet web server.
I tried several methods to create those elements. Basically it boils down to either (1):
var container = $('#selector');
for (...) container.append('<html code of the element>');
or (2):
var html = '';
for (...) html += '<html code of the element>';
$('#selector').append(html);
or (3):
var html = [];
for (...) html.push('<html code of the element>');
$('#selector').append(html.join(''));
Performance-wise, (1) is absolutely awful (3 s per batch on a desktop computer, up to 5 min on a Galaxy Note fondleslab), while (2) and (3) are roughly equivalent (300 ms on desktop, 1.5 s on the fondleslab). Those timings are for about 4000 elements, roughly 1/4 of what I expect in production, and that is not acceptable, since I should be able to handle the full amount of data (15k elements) in under 1 s, even on the fondleslab.
The very fact that (2) and (3) have the same performance made me think I was hitting the infamous "naively concatenating strings uselessly reallocates and copies lots of memory" problem (even though I'd expect join() to be smarter than that). [edit: after looking more closely into it, it turns out I was misled about that; the problem is more on the rendering side -- thanks DanC]
In C++ I'd just use std::string::reserve() and operator += to avoid the useless reallocations, but I have no idea how to do that in JavaScript.
Any idea how to improve the performance further? Or at least ways to identify the bottleneck (even though I'm pretty sure it's the string concatenation)? I'm certainly no JavaScript guru...
Thanks for reading.
For what it's worth, that huge number of elements is because I'm drawing a (mostly real-time) graph using DIVs. I'm well aware of Canvas, but my app has to be compatible with old browsers, so unfortunately it's not an option. :(
Using DOM methods, building and appending 12000 elements clocks in around 55ms on my dual-core MacBook.
document.getElementById('foo').addEventListener('click', function () {
    build();
}, false);

function build() {
    console.time('build');
    var fragment = document.createDocumentFragment();
    for (var e = 0; e < 12000; e++) {
        var el = document.createElement('div');
        el.appendChild(document.createTextNode(e));
        fragment.appendChild(el);
    }
    document.querySelectorAll('body')[0].appendChild(fragment);
    console.timeEnd('build');
}
Fiddle
Resig on document.createDocumentFragment
This is not a solution to the performance problem, but only a way to ensure the UI loop is free to handle other requests.
You could try something like this:
var container = $('#selector');
for (...) {
    setTimeout(function () { container.append('<html code of the element>'); }, 0);
}
To be slightly more performant, I would actually call setTimeout only after every x iterations, after building up a larger string. Also, not having tried this myself, I am not sure the ordering of setTimeout calls will be preserved. If not, you can do something more like this:
var arrayOfStrings = [/* each element is the HTML for a batch of ~100 elements */];
function processNext(arr, i) {
    container.append(arr[i]);
    if (i + 1 < arr.length) {
        setTimeout(function () { processNext(arr, i + 1); });
    }
}
processNext(arrayOfStrings, 0);
Not pretty, but would ensure the UI is not locked up while the DOM is manipulated.

optimize jquery loop to set widths

I'm trying to stop strings in a div from expanding beyond the size of their variable-sized parent divs, using the tactic of setting an overflow and fixing the width. I have about 4400 DOM elements on the page (this can't be decreased, and there are typically more), but only about 100-300 need to be changed. Of course, this is not a problem in FF/WebKit, which can do it in less than a second, but IE is extraordinarily slow at over 7 seconds.
I've already eliminated any DOM traversal by using an array of pre-determined element IDs to address the tags in question. Is there something I'm missing, or some alternative trick to do this in a shorter time in IE?
for (id in ids) {
    jq("#" + ids[id] + "_name").css({
        "overflow": "hidden",
        "width": jq("#" + ids[id]).innerWidth() - 1
    });
}
Well, having gone right down to the metal of the DOM and still not gained any speed, I went for the alternative, which is to mitigate the problem so it's less of a problem for the user (maybe IE9 will save MS from this sort of embarrassment!). I looked at this blog entry by Nick Fitzgerald, which shows a technique for overcoming the issue. So here, using Nick's pattern, is my solution in the end (just the part for handling IE; I left the non-IE version as is):
yieldingEach(ids, function(namebox) {
    var elemName = document.getElementById(namebox + '_name');
    if (elemName) {
        var elem = document.getElementById(namebox);
        elemName.style.width = (elem.scrollWidth - 4) + 'px';
    }
});
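yieldingEach itself isn't shown in the post; a minimal sketch of Fitzgerald's chunking pattern, with a hypothetical chunk size, processes a slice of the array and then yields to the UI via setTimeout before continuing:
function yieldingEach(items, fn) {
    var i = 0, CHUNK = 100; // hypothetical batch size per timeslice
    (function next() {
        var end = Math.min(i + CHUNK, items.length);
        for (; i < end; i++) {
            fn(items[i]);
        }
        if (i < items.length) {
            setTimeout(next, 0); // let the browser repaint and handle input
        }
    })();
}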
This is a non-jQuery version... verify that it works in IE, but it should be slightly faster since you're not having jQuery do the work for you.
for (var i = 0; i < ids.length; i++) {
    var elemName = document.getElementById(ids[i] + '_name'),
        elem = document.getElementById(ids[i]);
    // overflow and width are style properties, not element attributes
    elemName.style.overflow = 'hidden';
    elemName.style.width = (elem.clientWidth - 1) + 'px';
}
