I have the following jQuery statement in a loop. #MainContent_gvDemographic and #tblFreez are two tables on the page.
$("#MainContent_gvDemographic").find(str)
.css("height", $("#tblFreez")
.find(str)
.css("height"))
When there are many steps in the loop, it takes a very long time to complete. To fix the problem, I then use two loops: one reads the heights of $("#tblFreez").find(str), the other writes them into $("#MainContent_gvDemographic").find(str), with an array carrying the height data between the two loops. It is much faster now. Does anyone know why the two solutions have such a big difference in performance? The computational complexity looks the same to me.
All right, here are the two complete versions.
Original:
function FixHeight() {
var rowCount = $('#tblFreez tr').length;
for (var i = 0; i < rowCount; i++) {
var str = "";
if ($.browser.msie) {
str = "tr:eq(" + i + ") td";
}
else {
str = "tr:eq(" + i + ")";
}
$("#MainContent_gvDemographic").find(str).css("height", $("#tblFreez").find(str).css("height"));
}
}
New:
function FixHeight() {
var rowCount = $('#tblFreez tr').length;
var hei = new Array();
for (var i = 0; i < rowCount; i++) {
var str = "";
if ($.browser.msie) {
str = "tr:eq(" + i + ") td";
}
else {
str = "tr:eq(" + i + ")";
}
hei[i] = $("#tblFreez").find(str).css("height");
}
for (var i = 0; i < rowCount; i++) {
var str = "";
if ($.browser.msie) {
str = "tr:eq(" + i + ") td";
}
else {
str = "tr:eq(" + i + ")";
}
$("#MainContent_gvDemographic").find(str).css("height", hei[i]);
}
}
Why not use only one loop, and jQuery's .each() instead of for? I haven't tested the code below, but it should work.
function FixHeight() {
var $MainContent = $("#MainContent_gvDemographic");
var $tblFreezRows = $("#tblFreez tr");
var hei, $row;
$tblFreezRows.each(function(index, elem){
$row = $(this);
if ($.browser.msie) {
hei = $row.find('td').css("height");
$MainContent.find("tr:eq(" + index + ") td").css("height", hei);
}
else {
hei = $row.css("height");
$MainContent.find("tr:eq(" + index + ")").css("height", hei);
}
});
}
DOM operations are usually the expensive operations.
Your first version does heavier DOM work per iteration, while your second version has more iterations overall. It comes down to load versus number.
Ideally, your first version should be faster, since it is a single loop with half as many iterations as the second version, but that is not always the case.
As a rough illustration, assume your memory budget is 1000M, of which 300M is garbage, meaning it can be cleaned up, and that the runtime calls the garbage collector whenever memory use gets close to 1000M. With those conditions in mind, say that in your first version every 5 iterations consume the 1000M, so the garbage collector has to run to free up resources for the next iterations. Running 100 iterations then really means 100 iterations plus 20 GC passes.
In your second case, assume it takes 50 iterations to fill up 1000M, so you end up with only 4 GC passes: 20 versus 4 invocations of another process in between your iterations.
The above is just speculation, and real memory management is much smarter than what I have described, but it should give you an idea of load versus numbers.
Anyway, try the code below and see if it fixes your problem.
Setting height at TR level
var fTable = document.getElementById('tblFreez'); // no '#' with getElementById
$("#MainContent_gvDemographic tr").each(function(ridx) {
    $(this).height($(fTable.rows[ridx]).height());
});
Setting height at TD level
var fTable = document.getElementById('tblFreez'); // no '#' with getElementById
$("#MainContent_gvDemographic tr").each(function(ridx) {
    $(this).find('td').each(function (cidx) {
        $(this).height($(fTable.rows[ridx].cells[cidx]).height()); // a table row exposes .cells, not .cols
    });
});
Try detaching the elements before you alter/search them. Then re-append them. DOM-operations are costly.
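For example, a minimal sketch of that detach-and-reattach idea for the tables in this question (same ids as above; the browser-specific tr/td selector logic is simplified for illustration):
var $main = $("#MainContent_gvDemographic");
var $parent = $main.parent();
var $freezRows = $("#tblFreez tr");
var $detached = $main.detach(); // removed from the document; data and events are kept
$detached.find("tr").each(function (i) {
    // only the writes happen off-document; reads from #tblFreez still touch the live DOM
    $(this).css("height", $freezRows.eq(i).css("height"));
});
$parent.append($detached); // one reinsertion instead of many live updates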
Using a for loop in JavaScript, I'm getting an unknown number of ids; they're not in an array but come in one by one.
Is there a way to get an alert when there are no more ids to retrieve, meaning the for loop is done?
I can't wrap my head around this; any help would be greatly appreciated.
Thanks!
Edited with code for clarification.
function iterateDevices(api) {
    var count = api.getcount("devices");
    var apiPath = dequotePath(api);
    for (var i = 0; i < count; i++) {
        var deviceApi = new LiveAPI(apiPath + " devices " + i);
        if (deviceApi) {
            var deviceName = deviceApi.get("name");
            var deviceid = deviceApi.id;
            //var deviceName = deviceApi.get("parameters");
            var className = deviceApi.get("class_name");
            var deviceApiPath = dequotePath(deviceApi);
            var chainsCount;
            var chainApi;
            var j;
            if ((className == "DrumGroupDevice") || (className == "AudioEffectGroupDevice") || (className == "InstrumentGroupDevice")) {
                //post(deviceName + " id " + deviceid + "\'\n");
                //outlet(0, deviceid);
                //arr.push(deviceName);
                if (deviceApi.get("can_have_chains") == 1) {
                    chainsCount = deviceApi.getcount("chains"); // only racks have chains
                    for (j = 0; j < chainsCount; j++) {
                        //post("id" + deviceid + " found device " + deviceName + " at path \'" + deviceApiPath + "\'\n");
                        //outlet(0, deviceid);
                        chainApi = new LiveAPI(deviceApiPath + " chains " + j);
                        iterateDevices(chainApi);
                        myFunction();
                    }
                    chainsCount = deviceApi.getcount("return_chains"); // only racks have chains
                    for (j = 0; j < chainsCount; j++) {
                        //post("2 found device " + deviceName + "id" + deviceid + " at path \'" + deviceApiPath + "\'\n");
                        //outlet(0, deviceid);
                        chainApi = new LiveAPI(deviceApiPath + " return_chains " + j);
                        iterateDevices(chainApi);
                    }
                }
            }
        }
    }
}
iterateDevices.local = 1;
The purpose of a for loop is to deal with a known number of iterations. If you want to deal with an unknown number of iterations, you would use a while loop.
Of course, this is programming, so let's look at the crazy things we can do:
- Iterate over a collection. We don't necessarily know how many things are in the collection, but we may want to iterate over all of them. The number of things in the collection might even change as we're iterating.
- We can change how we iterate through the loop. That whole i++ part? What if we replace it with i += 2? Or what if it's i -= 1? Or i += j, where j changes while we're iterating?
- We can change how we break out of the loop. You can put a break statement in there to break out of the loop at any time. You can also change the conditional of the loop. What if i < 100 is replaced by i < j? Or what if we replace it with i < 100 || q == true? (See the sketch after this list.)
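For instance, a small sketch of that last point, where the end of the loop is only discovered while iterating (lookupId is a hypothetical helper, not part of the question's API):
var ids = [];
for (var i = 0; ; i++) {      // no fixed upper bound in the condition
    var id = lookupId(i);     // hypothetical: returns null when there are no more ids
    if (id === null) {
        break;                // we only learn we are done while iterating
    }
    ids.push(id);
}
alert("done - found " + ids.length + " ids");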
You may use a while loop instead of a for loop, and insert a condition to terminate the loop.
Using pseudo-code, you could do something like:
other_ids = True // boolean var
while(other_ids) {
// do what you have to do
other_ids = FunctionToCheckWhetherThereAreNewIds
}
FunctionToCheckWhetherThereAreNewIds should be a function that gives you true if there are new ids and false if there are not.
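In plain JavaScript that pattern could look roughly like this (getNextId and processId are hypothetical placeholders for however your ids actually arrive):
var id = getNextId();     // hypothetical: returns null/undefined when there are no more ids
while (id != null) {
    processId(id);        // do what you have to do with the current id
    id = getNextId();
}
alert("No more ids - the loop is done");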
I need to populate eight selectObject pulldown objects on a page with several thousand (8192) items each. I'm currently doing this in Javascript the only way I know how:
var iCount;
var option1;
var selectObject1 = document.getElementById('ifbchan');
for(iCount = 0; iCount < 8192; iCount++)
{
option1=document.createElement("option");
option1.text = "Out " + iCount;
option1.value=iCount;
try
{
selectObject1.add(option1, selectObject1.options[null]);
}
catch (e)
{
selectObject1.add(option1, null);
}
}
selectObject1.selectedIndex = 0;
This method works properly but is extremely slow! Each of these 8K loops takes something like 10 seconds to complete. Multiply by 8 different loops and the problem is obvious. Is there any other way to add large numbers of items to a drop down list that would be faster? Any faster alternatives to the drop down control for presenting a large list of items? Thanks for any ideas.
~Tim
I'd try the following:
var elements = ""
var i;
for(i= 0; i < 8192; i++){
elements += "<option value='"+ i + "'>Out " + i + "</option>";
}
document.getElementById("ifbchan").innerHTML = elements;
This way you only perform one action on the DOM in total, not 8000+.
Oh and here's one I prepared earlier: http://jsfiddle.net/3Ub4x/
A few things before the answer.
First of all, I do not think that the best way to do this is a server-side implementation. If you can do something on the client, you should do it there and not touch your server (unless it is security related).
Second, why exactly do you need 8000 elements in a select list? Think as a user of your app: who would like to scroll through 8000 elements just to select one? As was mentioned before, autocomplete sounds much more suitable.
And now the answer:
Your original approach is here: it takes approximately 1724 milliseconds to complete for 10000 elements (you can see this by running the script and checking the inspector).
var start = new Date();
var n = 10000;
var iCount;
var option1;
var selectObject1 = document.getElementById('ifbchan');
for(iCount = 0; iCount < n; iCount++)
{
option1=document.createElement("option");
option1.text = "Out " + iCount;
option1.value=iCount;
try
{
selectObject1.add(option1, selectObject1.options[null]);
}
catch (e)
{
selectObject1.add(option1, null);
}
}
selectObject1.selectedIndex = 0;
var time = new Date() - start;
console.log(time);
I do not like this code much (it is too many lines), so I will rewrite it in jQuery.
var start = new Date();
var n = 10000;
for (var i = 0; i<n; i++){
$("#ifbchan").append("<option value="+i+">"+i+"</option>")
}
var time = new Date() - start;
console.log(time);
The next fiddle is here. Far fewer lines, and some time improvement: now it is 1312 milliseconds. But it still appends a new element on every loop iteration.
The next fiddle gets rid of this.
var start = new Date();
var n = 10000;
var html = '';
for (var i = 0; i<n; i++){
html += "<option value="+i+">"+i+"</option>";
}
$("#ifbchan").append(html);
var time = new Date() - start;
console.log(time);
Wow, now it is only 140 milliseconds.
for (var i = 0; i<n; i++){
select.append('<option value='+i+'>'+i+'</option>');
}
Beware, this doesn't work in IE. See this link -
Using innerHTML to Update a SELECT – Differences between IE and FF
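If old IE support matters, one commonly used workaround (a rough sketch, not tested against the question's page) is to build the complete <select> markup as a string and replace the whole element, so innerHTML is never set on the <select> itself:
var html = '<select id="ifbchan">';
for (var i = 0; i < 8192; i++) {
    html += '<option value="' + i + '">Out ' + i + '</option>';
}
html += '</select>';

var oldSelect = document.getElementById('ifbchan');
var wrapper = document.createElement('div');
wrapper.innerHTML = html;            // parsing happens on the div, which IE handles fine
oldSelect.parentNode.replaceChild(wrapper.firstChild, oldSelect);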
Consider two versions of the same loop iteration:
for (var i = 0; i < nodes.length; i++) {
...
}
and
var len = nodes.length;
for (var i = 0; i < len; i++) {
...
}
Is the latter version anyhow faster than the former one?
The accepted answer is not right, because any decent engine should be able to hoist the property load out of the loop with such simple loop bodies.
See this jsperf - at least in V8 it is interesting to see how storing the length in a variable changes the register allocation: in the code where the variable is used, the sum variable is stored on the stack, whereas with the array.length-in-the-loop code it is stored in a register. I assume something similar happens in SpiderMonkey and Opera too.
According to its author, JSPerf is used incorrectly 70% of the time. Broken jsperfs like the ones given in the other answers here give misleading results, and people draw wrong conclusions from them.
Some red flags are: putting code in the test cases instead of in functions, not testing the result for correctness (or not using some mechanism to prevent dead code elimination), and defining functions in the setup or test cases instead of globally. For consistency you will also want to warm up the test functions before any benchmark, so that compilation doesn't happen in the timed section.
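A minimal sketch of the kind of warm-up meant here (names are illustrative, and the timing itself is still naive):
function timeIt(fn, arg, iterations) {
    for (var w = 0; w < 1000; w++) {
        fn(arg);                   // warm-up: give the JIT a chance to compile/optimize fn first
    }
    var start = Date.now();
    for (var i = 0; i < iterations; i++) {
        fn(arg);
    }
    return Date.now() - start;     // only the post-warm-up runs are timed
}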
Update: 16/12/2015
As this answer still seems to get a lot of views I wanted to re-examine the problem as browsers and JS engines continue to evolve.
Rather than using JSPerf I've put together some code to loop through arrays using both methods mentioned in the original question. I've put the code into functions to break down the functionality as would hopefully be done in a real world application:
function getTestArray(numEntries) {
var testArray = [];
for (var i = 0; i < numEntries; i++) {
testArray.push(Math.random());
}
return testArray;
}
function testInVariable(testArray) {
for (var i = 0; i < testArray.length; i++) {
doSomethingAwesome(testArray[i]);
}
}
function testInLoop(testArray) {
var len = testArray.length;
for (var i = 0; i < len; i++) {
doSomethingAwesome(testArray[i]);
}
}
function doSomethingAwesome(i) {
return i + 2;
}
function runAndAverageTest(testToRun, testArray, numTimesToRun) {
var totalTime = 0;
for (var i = 0; i < numTimesToRun; i++) {
var start = new Date();
testToRun(testArray);
var end = new Date();
totalTime += (end - start);
}
return totalTime / numTimesToRun;
}
function runTests() {
var smallTestArray = getTestArray(10000);
var largeTestArray = getTestArray(10000000);
var smallTestInLoop = runAndAverageTest(testInLoop, smallTestArray, 5);
var largeTestInLoop = runAndAverageTest(testInLoop, largeTestArray, 5);
var smallTestVariable = runAndAverageTest(testInVariable, smallTestArray, 5);
var largeTestVariable = runAndAverageTest(testInVariable, largeTestArray, 5);
console.log("Length in for statement (small array): " + smallTestInLoop + "ms");
console.log("Length in for statement (large array): " + largeTestInLoop + "ms");
console.log("Length in variable (small array): " + smallTestVariable + "ms");
console.log("Length in variable (large array): " + largeTestVariable + "ms");
}
console.log("Iteration 1");
runTests();
console.log("Iteration 2");
runTests();
console.log("Iteration 3");
runTests();
In order to achieve as fair a test as possible each test is run 5 times and the results averaged. I've also run the entire test including generation of the array 3 times. Testing on Chrome on my machine indicated that the time it took using each method was almost identical.
It's important to remember that this is a bit of a toy example; in fact, most examples taken out of the context of your application are likely to yield unreliable information, because the other things your code is doing may affect performance directly or indirectly.
The bottom line
The best way to determine what performs best for your application is to test it yourself! JS engines, browser technology and CPU technology are constantly evolving, so it's imperative that you always test performance for yourself within the context of your application. It's also worth asking yourself whether you have a performance problem at all; if you don't, then time spent making micro-optimizations that are imperceptible to the user could be better spent fixing bugs and adding features, leading to happier users :).
Original Answer:
The latter one would be slightly faster. The length property does not iterate over the array to check the number of elements, but every time it is called on the array, that array must be dereferenced. By storing the length in a variable the array dereference is not necessary each iteration of the loop.
If you're interested in the performance of different ways of looping through an array in javascript then take a look at this jsperf
According to w3schools "Reduce Activity in Loops" the following is considered bad code:
for (i = 0; i < arr.length; i++) {
And the following is considered good code:
var arrLength = arr.length;
for (i = 0; i < arrLength; i++) {
Since accessing the DOM is slow, the following was written to test the theory:
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>my test scripts</title>
</head>
<body>
<button onclick="initArray()">Init Large Array</button>
<button onclick="iterateArraySlowly()">Iterate Large Array Slowly</button>
<button onclick="iterateArrayQuickly()">Iterate Large Array Quickly</button>
<p id="slow">Slow Time: </p>
<p id="fast">Fast Time: </p>
<p id="access"></p>
<script>
var myArray = [];
function initArray(){
var length = 1e6;
var i;
for(i = 0; i < length; i++) {
myArray[i] = i;
}
console.log("array size: " + myArray.length);
}
function iterateArraySlowly() {
var t0 = new Date().getTime();
var slowText = "Slow Time: "
var i, t;
var elm = document.getElementById("slow");
for (i = 0; i < myArray.length; i++) {
document.getElementById("access").innerHTML = "Value: " + i;
}
t = new Date().getTime() - t0;
elm.innerHTML = slowText + t + "ms";
}
function iterateArrayQuickly() {
var t0 = new Date().getTime();
var fastText = "Fast Time: "
var i, t;
var elm = document.getElementById("fast");
var length = myArray.length;
for (i = 0; i < length; i++) {
document.getElementById("access").innerHTML = "Value: " + i;
}
t = new Date().getTime() - t0;
elm.innerHTML = fastText + t + "ms";
}
</script>
</body>
</html>
The interesting thing is that whichever iteration is executed first always seems to win out over the other. But what is considered "bad code" seems to win the majority of the time after each has been executed a few times. Perhaps someone smarter than I can explain why. But for now, syntax-wise I'm sticking with what is more legible to me:
for (i = 0; i < arr.length; i++) {
If nodes is a DOM NodeList, then the second loop will be much, much faster, because in the first loop you look up the live DOM collection's length (very costly) at each iteration. jsperf
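A rough illustration of the difference with a live NodeList (assuming a page full of div elements):
var nodes = document.getElementsByTagName('div'); // live NodeList

for (var i = 0; i < nodes.length; i++) {          // nodes.length is re-evaluated on every pass
    nodes[i].className = 'visited';
}

var len = nodes.length;                           // length read from the collection once
for (var j = 0; j < len; j++) {
    nodes[j].className = 'visited';
}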
This has always been the most performant on any benchmark test that I've used.
for (i = 0, val; val = nodes[i]; i++) {
doSomethingAwesome(val);
}
I believe that nodes.length is already defined and is not being recalculated on each use. So the first example would be faster because it defines one less variable, though the difference would be unnoticeable.
Using jQuery .append I write some HTML to form a grid of 10,000 "pixels" (125 by 80), where the pixels are numbered first down and then across. This works fine, but it is slow enough that there is noticeable lag loading the page compared to writing the grid straight in HTML. Is it possible to speed this up at all while still maintaining the pixel numbering?
My html is:
<div id="grid">
</div>
Javascript:
function createGrid() {
var counter = 1;
var rowCounter = 1;
var divs = 10000;
$('<table width="625px"><tr>').appendTo('#grid');
for (var i = 1; i <= divs; i++) {
if (i % 125 == 0 ){
$('</ tr><tr>').appendTo('#grid');
rowCounter++;
counter = rowCounter;
}
else
$('<td id="pixel_' + counter + '" class="pixel"></td>').appendTo('#grid');
counter =+ 80;
}
$('</tr></table>').appendTo('#grid');
}
Your code won't work as you expect it to, because .append() creates complete DOM elements. $('<table width="625px"><tr>').appendTo('#grid') will automatically close both tags, and you'll have to append the next row to the table, and the cell to the row.
As it happens, it's inefficient to constantly append elements to the DOM anyway. Instead, build the table as a single string and write it out all at once. This is more efficient since you're only adding to the DOM one time.
function createGrid() {
    var counter = 1;
    var rowCounter = 1;
    var divs = 10000;
    var tstr = '<table width="625px"><tr>';
    for (var i = 1; i <= divs; i++) {
        if (i % 125 == 0) {
            tstr += '</tr><tr>';
            rowCounter++;
            counter = rowCounter;
        } else {
            tstr += '<td id="pixel_' + counter + '" class="pixel"></td>';
            counter += 80; // note: += here; the original "=+ 80" just assigns 80
        }
    }
    tstr += '</tr></table>';
    $('#grid').append(tstr);
}
http://jsfiddle.net/mblase75/zuCCx/
$('<table width="625px"><tr>')
is not the same as writing and appending an HTML string! jQuery will evaluate that <table><tr> string and create a DOMElement from it. I.e., with just this tiny bit of code, you have created a whole table in the DOM. The closing tags are auto-completed and the table is instantiated. From then on you need to work with it as a DOM object, not as a string to append to.
Your code is probably slow because you're creating tons of incomplete/autocompleted tiny DOM objects which are all somehow being bunched together, probably not even in the correct structure. Either manipulate DOM objects, which should be pretty fast, or construct a complete string and have it evaluated once.
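For the "manipulate DOM objects" route, a rough sketch (same id as the question, 125 columns by 80 rows, numbering going down first and then across) that builds the table with the DOM table API and appends it once:
function createGridDom() {
    var table = document.createElement('table');
    table.width = '625px';
    for (var r = 0; r < 80; r++) {                 // 80 rows...
        var row = table.insertRow(-1);
        for (var c = 0; c < 125; c++) {            // ...of 125 cells = 10,000 "pixels"
            var cell = row.insertCell(-1);
            cell.id = 'pixel_' + (c * 80 + r + 1); // numbered down the column first, then across
            cell.className = 'pixel';
        }
    }
    $('#grid').append(table);                      // a single DOM insertion
}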
One of the first steps towards improving performance would be generating the complete HTML and appending to the DOM in one step.
function createGrid() {
    var counter = 1;
    var rowCounter = 1;
    var divs = 10000;
    var html = '<table width="625px"><tr>';
    for (var i = 1; i <= divs; i++) {
        if (i % 125 == 0) {
            html += '</tr><tr>';
            rowCounter++;
            counter = rowCounter;
        } else {
            html += '<td id="pixel_' + counter + '" class="pixel"></td>';
            counter += 80; // += keeps the down-then-across numbering; "=+ 80" would just assign 80
        }
    }
    html += '</tr></table>';
    $('#grid').html(html);
}
Is there a way to make the following code faster? It becomes too slow when the array has more than 1000 records, especially in IE6.
dbusers = data.split(";");
$("#users").html("");
for (i = 0; i < dbusers.length; i++) {
if ($("#username").val() != "") {
if (dbusers[i].indexOf($("#username").val()) != -1) {
$("#users").append(dbusers[i] + "<br>");
}
} else {
$("#users").append(dbusers[i] + "<br>");
}
}
Minimize the amount of work you do in the loop. Don't add stuff to the DOM in the loop, create a string.
var dbusers = data.split(";");
var username = $("#username").val();
var userlist = "";
if (username == "") {
for (i = 0; i < dbusers.length; i++) {
userlist += dbusers[i] + "<br>";
}
} else {
for (i = 0; i < dbusers.length; i++) {
if (dbusers[i].indexOf(username) != -1) {
userlist += dbusers[i] + "<br>";
}
}
}
$("#users").html(userlist);
Faster than those by far (especially in IE!) is to build your string as an array (yes, really) and then concatenate it at the end:
var dbusers = data.split(";"), username = $('#username').val();
$("#users").html($.map(dbusers, function(dbuser) {
    if (username == '' || dbuser.indexOf(username) !== -1)
        return dbuser + '<br>';
    return '';
}).join(''));
The $.map() routine will build an array from the return values of the function you pass. Here, my function is returning the user string followed by the <br>. The resulting array is then turned into a string by calling the native join() routine. Especially when you've got like 1000 things to work with, this will be much faster than building a string with repeated calls to +=! Try the two versions and compare!
Use a document fragment.
You can perform more optimizations, too, like removing that nasty if and creating the nodes yourself.
var frag = document.createDocumentFragment(),
dbUsers = data.split(';'),
dbUsersLength = dbUsers.length,
curDbUser,
usernameVal = $('#username').val();
for(var i = 0; i < dbUsersLength; ++i) {
curDbUser = dbUsers[i];
if(curDbUser.indexOf(usernameVal) !== -1) {
frag.appendChild(document.createTextNode(curDbUser));
frag.appendChild(document.createElement('br'));
}
}
$('#users').empty().append(frag);
I made a tool to benchmark all the current answers: http://dev.liranuna.com/strager/stee1rat.html
ghoppe's and mine seem to be the fastest.
IE6 doesn't support querySelector, so lookups can be particularly slow. Keep HTML manipulation within loops to a minimum by reducing the number of appends you do; each append has a regular expression run on it to extract the HTML and convert it to a DOM object. Also work in some micro-optimisations where you can; they might improve performance a little, especially over thousands of iterations.
var usersEl = $("#users"); // reduce lookups to the #users element
var result = ""; // create a variable for the HTML string
var unameVal = $("#username").val(); // do the username value lookup only once
dbusers = data.split(";");
usersEl.html("");
// Store the length of the array in a var in your loop to prevent multiple lookups
for (var i = 0, max = dbusers.length; i < max; i++) {
if (unameVal !== "") {
if (dbusers[i].indexOf(unameVal) != -1) {
result += dbusers[i] + "<br>";
}
} else {
result += dbusers[i] + "<br>";
}
}
usersEl.html(result); // Set the HTML only once, saves multiple regexes