On performance basics: which is better to use in code, JavaScript's querySelector() or jQuery's find(), on factors like speed and efficient access to DOM elements?
element = document.querySelector(selectors);
or
element = $(document).find(selectors);
querySelector is far more performant. It requires neither a library nor the construction of a jQuery object.
Warning, the following will block your browser for a little bit, depending on your computer's specs:
const t0 = performance.now();
for (let i = 0; i < 1e6; i++) {
const div = document.querySelector('div');
}
const t1 = performance.now();
for (let i = 0; i < 1e6; i++) {
const div = $(document).find('div');
}
const t2 = performance.now();
console.log('querySelector: ' + (t1 - t0));
console.log('jQuery: ' + (t2 - t1));
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div>some div</div>
That said, performance for selecting a single element will rarely matter - I think it would only be something to consider if it's done in a nested loop, for example, and is done thousands of times in under a second.
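To make that nested-loop caveat concrete, here is a minimal sketch (the #status id and loop counts are hypothetical, not from the original posts): if the same lookup runs inside a hot loop, doing it once up front avoids the repeated cost entirely, whichever API you use.
// Repeated lookup inside a hot loop: the selection cost is paid on every pass.
for (let i = 0; i < 1e5; i++) {
  document.querySelector('#status').textContent = 'step ' + i; // hypothetical element
}
// Cached lookup: the selection cost is paid once, before the loop.
const status = document.querySelector('#status');
for (let i = 0; i < 1e5; i++) {
  status.textContent = 'step ' + i;
}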
Related
I know there are functions available for creating a div node with createElement(), but are there any other differences? Performance? Usability? Etc.
String div
$('body').append("<div>foo</div>");
versus
Creating a node div
var newDiv = document.createElement("div");
var divContent = document.createTextNode("foo");
newDiv.appendChild(divContent);
$('body').append(newDiv);
Using the string (example 1) actually performs better than creating nodes (the second example). This is because jQuery already does what your node example does for you, so you're effectively doing the work twice in the second example.
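Roughly speaking (a sketch of the idea, not jQuery's exact internals), the string form already performs the node construction for you; something similar can be expressed with jQuery's documented $.parseHTML helper (available since jQuery 1.8):
// Approximately what $('body').append("<div>foo</div>") boils down to:
// parse the HTML string into detached DOM nodes once, then append those nodes.
var nodes = $.parseHTML("<div>foo</div>"); // -> [ <div>foo</div> ]
$('body').append(nodes);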
Because people didn't seem to believe me, here's a simple benchmark I just made and ran, doing 10,000 operations of each method:
var start = performance.now();
for (var i = 0; i < 10000; i++) {
$('body').append("<div>foo</div>");
}
console.log('Took ' + (performance.now() - start) + ' ms');
Took 70 ms averaged over 10 tests.
var start = performance.now();
for (var i = 0; i < 10000; i++) {
var newDiv = document.createElement("div");
var divContent = document.createTextNode("foo");
newDiv.appendChild(divContent);
$('body').append(newDiv);
}
console.log('Took ' + (performance.now() - start) + ' ms');
Took 110 ms averaged over 10 tests.
Keep in mind that this all depends on the browser, jQuery version, other JavaScript on the page, etc. That said, I'd be very surprised if you're appending a number of elements to the DOM where this performance difference actually matters.
I have an AJAX call that returns 500 rows. Each row creates an HTML object that is added to the DOM. This all works fine, but it's slow.
I would like to add 20, then render what is done, and then continue to add the last 480. However, I can't figure out how to force rendering.
The code is something like this:
for (i = 0; i < 500; i += 1) {
$(newdata[i]).insertAfter('#object');
}
where each entry of newdata is a text string, for example
"<p>hello world</p>"
Edit
I might have left out some critical information in my post. The nodes are not to be inserted in order. It's a tree, and each node has a parent that I know about. Each parent is guaranteed to be inserted before its children. So I can't just append nodes one after another, since they might be in different branches.
Stop inserting one node at a time; insert collections of nodes instead.
It's not the loop that's slow, it's DOM manipulation that is slow, and inserting 500 DOM nodes one at a time will be slow.
var nodes = $();
for (i = 0; i < 20; i++) {
    // collect the new elements; calling .append() on an empty jQuery set would do nothing
    nodes = nodes.add(newdata[i]);
}
$('#object').after(nodes);

var more_nodes = $();
for (i = 20; i < 500; i++) {
    more_nodes = more_nodes.add(newdata[i]);
}
$('#object').after(more_nodes);
If you do it like this, it will probably be ten times faster, and you don't have to insert 20, then 480 etc.
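The same batching idea also works with plain DOM APIs; here is a sketch using a DocumentFragment (assuming, as in the question, that each entry of newdata is an HTML string):
// Build everything off-DOM in a fragment, then touch the live DOM only once.
var fragment = document.createDocumentFragment();
for (var i = 0; i < newdata.length; i++) {
  $(newdata[i]).each(function () {
    fragment.appendChild(this); // detached nodes are cheap to move around
  });
}
var anchor = document.getElementById('object');
// roughly the same position as insertAfter('#object'), but one insertion for all nodes
anchor.parentNode.insertBefore(fragment, anchor.nextSibling);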
Give the rendering code time to run: write a few rows, call setTimeout() to let other code run, and continue:
function renderLines(newdata) {
var len = newdata.length;
var sofar = 0;
var obj = $('#object');
var renderSome = function() {
for ( var i = 0; (i < 20) && ((i + sofar) < len); ++i )
{
$(newdata[i + sofar]).insertAfter(obj);
}
sofar += 20;
if (sofar < len)
setTimeout(renderSome, 10);
};
setTimeout(renderSome, 10);
}
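For the tree-shaped data described in the question's edit, the same batching idea can still be applied per parent. This is only a sketch under assumptions not stated in the question: each entry carries hypothetical parentId and html fields, and children are appended into their already-inserted parent.
// Hypothetical shape: newdata = [{ parentId: 'node-7', html: '<p>hello world</p>' }, ...]
function insertGroupedByParent(newdata) {
  var byParent = {};
  for (var i = 0; i < newdata.length; i++) {
    var row = newdata[i];
    (byParent[row.parentId] = byParent[row.parentId] || []).push(row.html);
  }
  // One DOM append per parent instead of one per row.
  for (var parentId in byParent) {
    if (byParent.hasOwnProperty(parentId)) {
      $('#' + parentId).append(byParent[parentId].join(''));
    }
  }
}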
I need to populate eight select pulldown objects on a page with several thousand (8192) items each. I'm currently doing this in JavaScript the only way I know how:
var iCount;
var option1;
var selectObject1 = document.getElementById('ifbchan');
for(iCount = 0; iCount < 8192; iCount++)
{
option1=document.createElement("option");
option1.text = "Out " + iCount;
option1.value=iCount;
try
{
selectObject1.add(option1, selectObject1.options[null]);
}
catch (e)
{
selectObject1.add(option1, null);
}
}
selectObject1.selectedIndex = 0;
This method works properly but is extremely slow! Each of these 8K loops takes something like 10 seconds to complete. Multiply by 8 different loops and the problem is obvious. Is there any other way to add large numbers of items to a drop down list that would be faster? Any faster alternatives to the drop down control for presenting a large list of items? Thanks for any ideas.
~Tim
I'd try the following:
var elements = ""
var i;
for(i= 0; i < 8192; i++){
elements += "<option value='"+ i + "'>Out " + i + "</option>";
}
document.getElementById("ifbchan").innerHTML = elements;
This way you only perform one action on the DOM, rather than 8000+ (one per loop iteration).
Oh and here's one I prepared earlier: http://jsfiddle.net/3Ub4x/
A few things before the answer.
First of all, I do not think that a server-side implementation is the best way to do this. If you can do something on the client, you should do it there and not touch your server (unless it is security related).
Second thing: why exactly do you need 8000 elements in a select list? Think as a user of your app: who would want to scroll through 8000 elements just to pick one? As was mentioned before, an autocomplete sounds much more suitable.
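As an aside on that autocomplete suggestion, here is a minimal sketch using a native <datalist> (the element ids are hypothetical, and the option markup is still built as one string, as in the answers below):
// A text input the user can type into, backed by a generated <datalist>.
var input = document.createElement('input');
input.setAttribute('list', 'ifbchan-list'); // hypothetical id
var list = document.createElement('datalist');
list.id = 'ifbchan-list';
var options = '';
for (var i = 0; i < 8192; i++) {
  options += '<option value="Out ' + i + '">';
}
list.innerHTML = options; // one DOM write for all options
document.body.appendChild(input);
document.body.appendChild(list);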
And now, the answer:
Your original approach is here: it takes approximately 1724 milliseconds to complete for 10000 elements (you can see this by running the script and checking the inspector).
var start = new Date();
var n = 10000;
var iCount;
var option1;
var selectObject1 = document.getElementById('ifbchan');
for(iCount = 0; iCount < n; iCount++)
{
option1=document.createElement("option");
option1.text = "Out " + iCount;
option1.value=iCount;
try
{
selectObject1.add(option1, selectObject1.options[null]);
}
catch (e)
{
selectObject1.add(option1, null);
}
}
selectObject1.selectedIndex = 0;
var time = new Date() - start;
console.log(time);
I do not like this code much (it is too many lines), so I will rewrite it in jQuery.
var start = new Date();
var n = 10000;
for (var i = 0; i<n; i++){
$("#ifbchan").append("<option value="+i+">"+i+"</option>")
}
var time = new Date() - start;
console.log(time);
The next fiddle is here. Far fewer lines, and some improvement in time: now it is 1312 milliseconds. But it still appends a new element on every loop iteration.
The next fiddle gets rid of this.
var start = new Date();
var n = 10000;
var html = '';
for (var i = 0; i<n; i++){
html += "<option value="+i+">"+i+"</option>";
}
$("#ifbchan").append(html);
var time = new Date() - start;
console.log(time);
Wow, now it is only 140 milliseconds.
for (var i = 0; i<n; i++){
select.append('<option value='+i+'>'+i+'</option>');
}
Beware: the innerHTML approach doesn't work in IE. See this link -
Using innerHTML to Update a SELECT – Differences between IE and FF
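If old-IE support matters, here is a hedged sketch of a workaround that avoids setting innerHTML on the <select> entirely, while still touching the live DOM only once:
// Build real <option> elements off-DOM in a fragment, then append the whole fragment once.
var select = document.getElementById('ifbchan');
var fragment = document.createDocumentFragment();
for (var i = 0; i < 8192; i++) {
  fragment.appendChild(new Option('Out ' + i, i)); // new Option(text, value)
}
select.appendChild(fragment);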
Consider two versions of the same loop iteration:
for (var i = 0; i < nodes.length; i++) {
...
}
and
var len = nodes.length;
for (var i = 0; i < len; i++) {
...
}
Is the latter version anyhow faster than the former one?
The accepted answer is not right, because any decent engine should be able to hoist the property load out of the loop with such simple loop bodies.
See this jsperf - at least in V8 it is interesting to see how storing the length in a variable changes the register allocation. In the code where the variable is used, the sum variable is stored on the stack, whereas with the array.length-in-the-loop code it is stored in a register. I assume something similar happens in SpiderMonkey and Opera too.
According to the author, jsPerf is used incorrectly 70% of the time. These broken jsperfs, as given in all the answers here, give misleading results, and people draw wrong conclusions from them.
Some red flags are: putting code in the test cases instead of in functions, not testing the result for correctness, having no mechanism for defeating dead-code elimination, and defining functions in the setup or test cases instead of globally. For consistency you will also want to warm up the test functions before any benchmark, so that compilation doesn't happen in the timed section.
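To make those red flags concrete, here is a minimal hand-rolled sketch (an illustration only, not a replacement for a proper benchmarking harness): the work lives in named functions, they are warmed up before timing, and the results are consumed so the loop bodies cannot be dead-code eliminated.
function sumWithCachedLength(arr) {
  var sum = 0;
  for (var i = 0, len = arr.length; i < len; i++) sum += arr[i];
  return sum;
}

function sumWithPropertyLookup(arr) {
  var sum = 0;
  for (var i = 0; i < arr.length; i++) sum += arr[i];
  return sum;
}

function bench(label, fn, arr) {
  var sink = 0;
  for (var w = 0; w < 1000; w++) sink += fn(arr);   // warm-up, outside the timed section
  var start = performance.now();
  for (var r = 0; r < 1000; r++) sink += fn(arr);   // timed runs
  var elapsed = performance.now() - start;
  console.log(label + ': ' + elapsed.toFixed(1) + ' ms (checksum ' + sink + ')');
}

var data = [];
for (var i = 0; i < 100000; i++) data.push(i);

bench('cached length', sumWithCachedLength, data);
bench('length in condition', sumWithPropertyLookup, data);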
Update: 16/12/2015
As this answer still seems to get a lot of views I wanted to re-examine the problem as browsers and JS engines continue to evolve.
Rather than using JSPerf I've put together some code to loop through arrays using both methods mentioned in the original question. I've put the code into functions to break down the functionality as would hopefully be done in a real world application:
function getTestArray(numEntries) {
var testArray = [];
for (var i = 0; i < numEntries; i++) {
testArray.push(Math.random());
}
return testArray;
}
function testInVariable(testArray) {
for (var i = 0; i < testArray.length; i++) {
doSomethingAwesome(testArray[i]);
}
}
function testInLoop(testArray) {
var len = testArray.length;
for (var i = 0; i < len; i++) {
doSomethingAwesome(testArray[i]);
}
}
function doSomethingAwesome(i) {
return i + 2;
}
function runAndAverageTest(testToRun, testArray, numTimesToRun) {
var totalTime = 0;
for (var i = 0; i < numTimesToRun; i++) {
var start = new Date();
testToRun(testArray);
var end = new Date();
totalTime += (end - start);
}
return totalTime / numTimesToRun;
}
function runTests() {
var smallTestArray = getTestArray(10000);
var largeTestArray = getTestArray(10000000);
var smallTestInLoop = runAndAverageTest(testInLoop, smallTestArray, 5);
var largeTestInLoop = runAndAverageTest(testInLoop, largeTestArray, 5);
var smallTestVariable = runAndAverageTest(testInVariable, smallTestArray, 5);
var largeTestVariable = runAndAverageTest(testInVariable, largeTestArray, 5);
console.log("Length in for statement (small array): " + smallTestInLoop + "ms");
console.log("Length in for statement (large array): " + largeTestInLoop + "ms");
console.log("Length in variable (small array): " + smallTestVariable + "ms");
console.log("Length in variable (large array): " + largeTestVariable + "ms");
}
console.log("Iteration 1");
runTests();
console.log("Iteration 2");
runTests();
console.log("Iteration 3");
runTests();
In order to achieve as fair a test as possible each test is run 5 times and the results averaged. I've also run the entire test including generation of the array 3 times. Testing on Chrome on my machine indicated that the time it took using each method was almost identical.
It's important to remember that this is a bit of a toy example; in fact, most examples taken out of the context of your application are likely to yield unreliable information, because the other things your code is doing may affect performance directly or indirectly.
The bottom line
The best way to determine what performs best for your application is to test it yourself! JS engines, browser technology and CPU technology are constantly evolving, so it's imperative that you always test performance yourself within the context of your application. It's also worth asking yourself whether you have a performance problem at all; if you don't, then time spent making micro-optimizations that are imperceptible to the user could be better spent fixing bugs and adding features, leading to happier users :).
Original Answer:
The latter one would be slightly faster. The length property does not iterate over the array to check the number of elements, but every time it is called on the array, that array must be dereferenced. By storing the length in a variable the array dereference is not necessary each iteration of the loop.
If you're interested in the performance of different ways of looping through an array in JavaScript, then take a look at this jsperf.
According to w3schools "Reduce Activity in Loops" the following is considered bad code:
for (i = 0; i < arr.length; i++) {
And the following is considered good code:
var arrLength = arr.length;
for (i = 0; i < arrLength; i++) {
Since accessing the DOM is slow, the following was written to test the theory:
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>my test scripts</title>
</head>
<body>
<button onclick="initArray()">Init Large Array</button>
<button onclick="iterateArraySlowly()">Iterate Large Array Slowly</button>
<button onclick="iterateArrayQuickly()">Iterate Large Array Quickly</button>
<p id="slow">Slow Time: </p>
<p id="fast">Fast Time: </p>
<p id="access"></p>
<script>
var myArray = [];
function initArray(){
var length = 1e6;
var i;
for(i = 0; i < length; i++) {
myArray[i] = i;
}
console.log("array size: " + myArray.length);
}
function iterateArraySlowly() {
var t0 = new Date().getTime();
var slowText = "Slow Time: "
var i, t;
var elm = document.getElementById("slow");
for (i = 0; i < myArray.length; i++) {
document.getElementById("access").innerHTML = "Value: " + i;
}
t = new Date().getTime() - t0;
elm.innerHTML = slowText + t + "ms";
}
function iterateArrayQuickly() {
var t0 = new Date().getTime();
var fastText = "Fast Time: "
var i, t;
var elm = document.getElementById("fast");
var length = myArray.length;
for (i = 0; i < length; i++) {
document.getElementById("access").innerHTML = "Value: " + i;
}
t = new Date().getTime() - t0;
elm.innerHTML = fastText + t + "ms";
}
</script>
</body>
</html>
The interesting thing is that whichever iteration is executed first always seems to win out over the other. But what is considered "bad code" seems to win the majority of the time after each has been executed a few times. Perhaps someone smarter than I am can explain why. But for now, syntax-wise I'm sticking with what is more legible to me:
for (i = 0; i < arr.length; i++) {
If nodes is a DOM NodeList, then the second loop will be much, much faster, because in the first loop you look up the DOM (very costly) at each iteration. jsperf
This has always been the most performant in any benchmark test that I've used. (Note that it relies on none of the entries being falsy, which holds for a list of DOM nodes.)
for (i = 0, val; val = nodes[i]; i++) {
doSomethingAwesome(val);
}
I believe that nodes.length is already defined and is not recalculated on each use, so the first example would be faster because it defines one less variable. Though the difference would be unnoticeable.
How do I get access to live DOM collections from jQuery?
Take for example this HTML <div id='a'></div> and this JavaScript code:
var a = $('#a');
var divs = a[0].getElementsByTagName('DIV');
for(var i=0; divs.length < 20; ) {
a.append($('<div>'+(i++)+'</div>'));
}
It will append 20 div children to <div id='a'> because divs is a live collection.
Is there anything in jQuery that I could replace the second line with to get the same result?
var divs = $('#a div'); results in an infinite loop.
JSFiddle is here.
In case #a already contains divs:
var $a = $('#a'),
toAdd = Math.max(20 - $a.find('div').length, 0);
for (var i = 0; i < toAdd; i++) {
$a.append('<div>' + i + '</div>');
}
That would be equivalent to the code above.
Live collections - the true ones - are not something that modern jQuery can return.
Moreover, querySelectorAll - the modern method that is intended to replace getElementsByTagName in the near future - also returns a static collection.
This is the answer to the question you've stated.
As for the question you really wanted to ask, other users have already tried to provide some help.
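A small sketch of the difference being described (run it on any page that contains at least one div):
var live = document.getElementsByTagName('div');  // live HTMLCollection: tracks DOM changes
var fixed = document.querySelectorAll('div');     // static NodeList: a snapshot

var before = { live: live.length, fixed: fixed.length };
document.body.appendChild(document.createElement('div'));

console.log(before);
console.log({ live: live.length, fixed: fixed.length }); // live grew by one, fixed did not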
Select the element each time; this will create a new jQuery object, which I think is the only way if the element is changing.
var a = $('#a');
for(var i=0; $('#a div').length < 20; ) {
a.append($('<div>'+(i++)+'</div>'));
if(i==50) break;
}
EDIT:
Or this:
for(var i=0, a=$('#a'); a.children('div').length < 20; ) {
a.append($('<div>'+(i++)+'</div>'));
if(i==50) break;
}
Or this, just one selector:
var a = $('#a');
var length = a.children('div').length;
while(length < 20) {
a.append($('<div>'+(length++)+'</div>'));
}
How to get DOM live collections with jQuery?
That’s not possible.
This has the same effect as your example code, though:
var $a = $('#a');
for (var i = 0; i < 20; i++) {
$a.append('<div>' + i + '</div>');
}
http://jsfiddle.net/mathias/Af538/
Update: If the code needs to be repeated periodically, you could use something like this:
var $a = $('#a'),
toAdd = Math.max(20 - $('div', $a).length, 0),
i;
for (i = 0; i < toAdd; i++) {
$a.append('<div>' + i + '</div>');
}
http://jsfiddle.net/mathias/S5C6n/
Is it always 20 div children?
Isn't it better to use the following:
var a = $('#a');
for(var i=0; i < 20; i++) {
a.append($('<div>'+(i)+'</div>'));
}
The syntax you're looking for is:
var divs = $('div#a');
Since IDs are supposed to be unique, you could just do:
var divs = $('#a');