Are there any significant performance differences between these two blocks of code?
var element = $("#some-element");
SomeMethod1(element);
SomeMethod2(element);
SomeMethod3(element);
and...
SomeMethod1($("#some-element"));
SomeMethod2($("#some-element"));
SomeMethod3($("#some-element"));
That depends on what you mean by significant.
The first code snippet will always be faster than the second, because calling $() more than once has a cost (as jQuery does not cache the results of previous calls). Whether it's significant or not depends on your performance requirements.
This does the element lookup in the DOM and creates a jQuery object:
var element = $("#some-element");
In the first one, it reuses this object.
In the second one, it has to do the lookup and creation three times; hence the first one is better performance-wise.
Yes. The first one is faster, the second slower.
Why?
Because the first searches for the element only once; the second, three times.
It is not significant, but it is not negligible either. In this case, since you are using an ID selector, it will not make much difference, but if you are going to use a class selector or an attribute selector, it will make a big difference.
The first one will always give you better performance than the second one because you are reusing the same object in multiple places.
Well, the first snippet reuses the result of the first call to $("#some-element"), whilst the other example needs to look up #some-element three times.
Latter:
3 string creations (the string literal holding the selector)
3 function calls to $(), each of which result in more function calls within
3 searches of the document for the same element
Given the type of selector being used, I wouldn't say it's significant, but the latter requires more processing time and uses more memory (see the list above). And that stuff adds up over the size of an app.
With a less direct selector, it could become significant much more quickly.
Edit: The latter, however, is ugly and drives me nuts. :)
Related
I have some trouble coming from my JavaScript (JS) code, since I sometimes need to access the same DOM elements more than once in the same function. Some reasoning is also provided here.
From a performance point of view, is it better to create a jQuery object once and then cache it, or to create the same jQuery object at will?
Example:
function() {
    $('selector XXX').doSomething(); // first call
    $('selector XXX').doSomething(); // second call
    ...
    $('selector XXX').doSomething(); // n-th call
}
or
function() {
    var obj = $('selector XXX');
    obj.doSomething(); // first call
    obj.doSomething(); // second call
    ...
    obj.doSomething(); // n-th call
}
I suppose the answer probably depends on the value of "n", so assume that n is a "small" number (e.g. 3), then a medium number (e.g. 10), and finally a large one (e.g. 30, as if the object were used for comparison in a for loop).
Thanks in advance.
It is always better to cache the element if n is greater than 1, or chain the operations together (you can do $('#something').something().somethingelse(); for most jQuery operations, since they usually return the wrapped set itself). As an aside, it has become a bit of a standard to name cache variables with a leading dollar sign $ so that later in the code it is evident that you are performing an operation on a jQuery set. So you will see a lot of people do var $content = $('#content'); and then $content.find('...'); later on.
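A short sketch of both habits together (the #content ID, class names, and method calls here are placeholders, not from the question):

var $content = $('#content');  // one DOM lookup, $-prefixed cache variable
$content.addClass('active');   // reuse the cached set
$content.find('.item').hide(); // search only within the cached set

// Chained equivalent (most jQuery methods return the set itself):
$('#content').addClass('active').find('.item').hide();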
The second is superior. Most importantly, it is cleaner. In the future, if you want to change your selector, you only need to change it in one place; otherwise you need to change it in N places.
Secondly, it should perform better, although a user would only notice it for a particularly heavy DOM, or if you were invoking that function a lot.
If you look at this question from a different perspective, the correct answer is obvious.
In the first case, you're duplicating the selection logic in every place it appears. If you change the name of the element, you have to change each occurrence. This should be reason enough not to do it. Now you have two options - either you cache the element's selector or the element itself. Using the element as an object makes more sense than using the name.
Performance-wise, I think the effect is negligible. You can probably find test results for this particular use case: caching jQuery objects vs. always re-selecting them. Performance might become an issue if you have a large DOM and do a lot of lookups, but you need to see for yourself whether that's the case.
If you want to see exactly how much memory your objects take up, you can use the Chrome Heap Profiler and check there. I don't know if similar tools are available for other browsers, and the implementations will probably vary wildly in performance, especially in IE's case, but it may satisfy your curiosity.
IMO, you should use the second variant, storing the result of the selection in a variable, not so much to improve performance as to have as little duplicate logic as possible.
As for caching $(this), I agree with Nick Craver's answer. As he said there, you should also use chaining where possible - cleans up your code and solves your problem.
You should take a look at
http://www.artzstudio.com/2009/04/jquery-performance-rules/
or
http://addyosmani.com/jqprovenperformance/
I almost always prefer to cache the jQuery object, but the benefit varies greatly based on exactly what you are using for your selector. If you are using IDs, the benefit is far less than if you are using other types of selectors. Also, not all selectors are created equal, so try to keep that in mind when you write your selectors.
For example:
$('table tr td') is a very poor selector. Try to use context or .find() and it will make a BIG difference.
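For instance, a quick sketch (the #myTable ID and the .addClass() call are placeholders):

// Poor: scans every td in the document, evaluated right to left.
$('table tr td').addClass('cell');

// Better: jump to the table by ID first, then search only its subtree.
$('#myTable').find('td').addClass('cell');

// Equivalent context form:
$('td', '#myTable').addClass('cell');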
One thing I like to do is place timers in my code to see just how efficient it is.
var timer = new Date();
// code here
console.log('time to complete: ' + (new Date() - timer));
Most operations on cached objects complete in less than 2 milliseconds, whereas brand-new selectors take quite a bit longer because you first have to find the element and then perform the operation.
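As a rough sketch, that timer can be used to compare the two approaches directly (the selector and the iteration count here are arbitrary):

var timer = new Date();
for (var i = 0; i < 1000; i++) {
    $('#some-element').text(); // fresh lookup and wrapper every iteration
}
console.log('uncached: ' + (new Date() - timer) + 'ms');

timer = new Date();
var $cached = $('#some-element');
for (var j = 0; j < 1000; j++) {
    $cached.text(); // reuses the cached object
}
console.log('cached: ' + (new Date() - timer) + 'ms');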
In JavaScript, functions are generally short-lived—especially when hosted by a browser. However, a function’s scope might outlive the function. This happens, for example, when you create a closure. If you want to prevent a jQuery object from being referenced for a long time, you can assign null to any variables that reference it when you are done with that variable or use indirection to create your closures. For example:
var createHandler = function (someClosedOverValue) {
    return function () {
        doSomethingWith(someClosedOverValue);
    };
};

var blah = function () {
    var myObject = jQuery('blah');
    // We want to enable the closure to access 'red' but not keep
    // myObject alive, so use a special createHandler for it:
    var myClosureWithoutAccessToMyObject = createHandler('red');
    doSomethingElseWith(myObject, myClosureWithoutAccessToMyObject);
    // After this function returns, and assuming doSomethingElseWith() does
    // not itself generate additional references to myObject, myObject
    // will no longer have any references and be eligible for garbage
    // collection.
};
Because jQuery(selector) might end up having to run expensive algorithms or even walk the DOM tree a bit for complex expressions that can’t be handled by the browser directly, it is better to cache the returned object. Also, as others have mentioned, for code clarity, it is better to cache the returned object to avoid typing the selector multiple times. I.e., DRY code is often easier to maintain than WET code.
However, each jQuery object has some amount of overhead. So storing large arrays of jQuery objects in global variables is probably wasteful, unless you actually need to operate on large numbers of these objects and still treat them as distinct. In such a situation, you might save memory by caching arrays of the DOM elements directly and using the jQuery(DOMElement) constructor, which should be basically free, when iterating over them.
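A minimal sketch of that idea (the selector and class name here are placeholders):

// Cache plain DOM elements (light) instead of jQuery objects (heavier):
var cells = $('#myTable td').get(); // .get() returns an array of DOM elements

for (var i = 0; i < cells.length; i++) {
    // Wrapping a single DOM element is nearly free compared to a selector lookup.
    jQuery(cells[i]).toggleClass('highlight');
}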
Though, as people say, you can only know the best approach for your particular case by benchmarking different approaches. It is hard to predict reality even when theory seems sound ;-).
I can't understand why this is happening.
I read here that:
The first $.each constitutes a single function call to start the iterator.
The second $(foo.vals).each makes three function calls to start the iterator. The first is to $(), which produces a new jQuery wrapper set (not sure how many other function calls are made during this process). Then the call to $().each. And finally it makes the internal call to jQuery.each to start the iterator.
In your example, the difference would be negligible to say the least. However, in a nested use scenario, you might find performance becoming an issue.
Finally, Cody Lindley in jQuery Enlightenment does not recommend using $.each for iterations greater than 1000 because of the function calls involved. Use a normal for (var i = 0; ...) loop.
So I tested it with this jsperf:
(task: find the tr's that have a checked checkbox inside them, and color those tr's.)
This is the jsbin.
But look at the jsperf:
against all expectations, the opposite is true (in Chrome, FF, and IE).
The one that uses $().each (which makes three function calls) is the fastest, etc.
What is going on here?
Your test is too heavy to really determine the actual difference between the three looping options.
If you want to test looping, then you need to do your best to remove as much non-related work from the test as possible.
As it stands, your test includes:
DOM selection
DOM traversal
element mutation
All of those are quite expensive operations compared to the loops themselves. When removing the extra stuff, the difference between the loops is much more visible.
http://jsperf.com/asdasda223/4
In both Firefox and Chrome, the for loop is well over 100x faster than the others.
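For reference, a stripped-down comparison might look something like this (the array is a stand-in for real data, and the loop bodies are intentionally empty so only the iteration overhead is measured):

var items = [1, 2, 3]; // placeholder data

// 1. $.each on a plain array: one call to start, plus one callback per item.
$.each(items, function (index, value) { /* work */ });

// 2. $(items).each: wraps the array in a jQuery object first, then iterates.
$(items).each(function () { /* work */ });

// 3. Plain for loop: no per-item function-call overhead.
for (var i = 0, len = items.length; i < len; i++) { /* work */ }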
Well
$.each() is a jQuery function that iterates over your list, so the overhead is the call to $.each itself plus a callback invocation for every item in the list.
$(thing).each(): the idea behind this is that $(thing) makes a jQuery instance, and you then iterate over this instance (.each acts on that instance). In your case, because the value you called it with is already a jQuery object, the overhead is minimal (are you an instance? oh yes you are).
for(): in this case there is no overhead at all, except looking up the length of the list on each iteration.
Consider doing:
var l = g.length;
for (var i = 0; i < l; i++) {
    // code
}
Depending on your HTML, most of the time could very well be spent in Sizzle (jQuery's selector engine) finding your selector in the document.
Also note, I don't think your selector is the best. Unless things have changed significantly recently, jQuery selectors are evaluated right to left, so consider limiting the scope of the selector by doing a .find() on everything beneath the first tag referenced by ID, as it will then search only a subset of the document.
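For instance, a minimal sketch of that scoping (reusing the #t table and the row-coloring task from the question; the 'checked-row' class is a placeholder):

// Whole-document search, evaluated right to left:
$('#t tr :checkbox:checked').closest('tr').addClass('checked-row');

// Scoped search: jump to #t by ID first, then look only inside its subtree:
$('#t').find('input:checkbox:checked').closest('tr').addClass('checked-row');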
Different approach to your "task": http://jsfiddle.net/ADMnj/17/
But I guess the performance issues come from not stating the selector in the right manner:
#t tr input:checkbox:checked
vs.
#t tr :checkbox:checked
Though the right way to check whether a checkbox is checked would be to select it like this:
#t tr input[checked="checked"]
(see the E[foo] entry in W3.org's selector list).
And last but not least, the fastest is also the one with the shortest code, which might account for the slight performance difference you get between 3, 4, and 5 lines of code, but I can't prove that.
Is it faster to write separate calls to the jQuery function, or to use a single chain? An added explanation of why one is faster than the other would be greatly appreciated :-)
An example:
$('#blah_id').niftyjQueryMethod1().niftyjQueryMethod2();
is faster/slower than
$('#blah_id').niftyjQueryMethod1();
$('#blah_id').niftyjQueryMethod2();
In your example, chaining is faster.
// Example 1
$('#blah_id').niftyjQueryMethod1().niftyjQueryMethod2();
// Example 2
$('#blah_id').niftyjQueryMethod1();
$('#blah_id').niftyjQueryMethod2();
In example 1, the call that creates the jQuery object ($('#blah_id')) is made only once. In example 2, it is made twice. This means the second example will be slower.
If you don't want to put them all in a chain, you can cache the selection in a variable:
var blah = $('#blah_id');
blah.niftyjQueryMethod1();
blah.niftyjQueryMethod2();
Presuming that the methods don't affect which elements are present in the selection (as, for example, parent, find, or filter do), this will be pretty much exactly the same as example 1.
This:
$('#blah_id').niftyjQueryMethod1().niftyjQueryMethod2();
Probably is faster than this:
$('#blah_id').niftyjQueryMethod1();
$('#blah_id').niftyjQueryMethod2();
But not because of the chaining. It's because there's a cost to the selector lookup.
This:
var $blah = $('#blah_id');
$blah.niftyjQueryMethod1();
$blah.niftyjQueryMethod2();
probably has no appreciable difference in speed from the first example.
The first one is faster. The second creates a jQuery object twice. If $('#blah_id') were cached in a variable, var $blah = $('#blah_id'), and then used in your second example as $blah rather than $('#blah_id'), it would make no real difference.
Yeah, chaining is faster, because the found DOM element (via $('#blah_id')) is passed directly from function to function.
If you separate them, the DOM element has to be found again and again. Every $("selector") is evil; try to avoid them as often as possible.
You can even keep references to the previously found objects:
var myElement = $('#blah_id');
myElement.doSomething();
I have had that question in mind for a long time.
Theoretically, the jQuery core function accepts an optional context value that can be a DOM element - $(".searched", $("#context")[0]) - or a jQuery object - $(".searched", $("#context")).
I discovered that last part while reading that fine article.
But I really can't see the difference between using a context and passing a more complex CSS expression. If there is no difference in the way it works, is there any performance difference?
Thanks
In either of your cases, the context gets converted to a DOM element (in Sizzle, the context portion) to search within, ultimately doing a .find() under the covers.
If you're concerned about performance (why not be as fast as possible?), you should use this instead:
$("#context .searched")
This version gets converted back into a jQuery object:
$("#context")[0]
So it's a bit slower on the jQuery side, since it has to be wrapped in a jQuery object before the .find() call. That performance difference is very minimal, but it's the only difference, so I'm noting it :)
The major difference would be that $(".searched", context); can take a variable as a context as well. It is effectively doing $(context).find('.searched'); under the hood, and I think the second version is more readable anyway, so I usually use that.
The use for this situation would be something like this:
$.fn.highlightSearch = function() {
    return this.each(function() {
        $('.searched', this).addClass('highlighted');
        // the commented line performs the same thing:
        // $(this).find('.searched').addClass('highlighted');
    });
};
$('#context').highlightSearch();
$('.somethingElse').highlightSearch();
Notice that in this case, you can't simply append the new selector on the end of the original.
If you have no other reason to hold a copy of $('#context'), then using $('#context .searched') is going to be quicker and simpler. However, if you already have $('#context') stored in a variable, it's better to use .find(selector) or the $(selector, context) form to search for your contained elements.
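A small sketch of both situations (the selectors and the .show() call are placeholders):

// One-off lookup: a single compound selector is simplest and fastest.
$('#context .searched').show();

// Already holding the context? Reuse it instead of re-selecting:
var $context = $('#context');
$context.find('.searched').show(); // same result as $('.searched', $context)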
Readability: a CSS selector like $("#context .searched") is far more readable than the context form.