I've been using jQuery for a long time, and I've been writing a slideshow plugin for work, and I (not 100% consciously) wrote probably 75% of it in a single chain. It's fully commented, and I specify each end() and what it's resetting to, etc., but does this slow down jQuery or the DOM loading, or does it actually speed things up?
It depends on your specific code, as always. As for storing a reference vs .end(), well... with a really long chain, it's faster to store a reference than to chain with .end() calls, simply because the chain has to handle extra baggage (storing/restoring), like the .prevObject reference, the .selector, the .context, etc. that you probably don't care about in many cases... and it creates more intertwined references to previous objects.
Where it's more costly is harder to measure... it's not the execution (though that is slower, even if infinitesimally); it's the more complicated garbage collection needed to clean up all those objects later, since the dependency graph is now much larger.
Now... will it make a measurable difference? Not unless your chain is really long, in which case it's probably a micro-optimization you need not worry about in most cases.
99% of the time, unless you're making some egregious performance penalizing call, don't worry about it, as with most micro-optimizations. If you're having a problem with performance, then get into it.
One of the most expensive things you can do in a modern browser is to access and manipulate the DOM. Chaining lets you minimize the actual lookups that you have to do, which can mean significantly faster code. The other option is to do the initial lookup, store the result in a variable, and do everything off of that variable. That being said, jQuery was specifically designed with that chaining API in mind, so it is more idiomatic to chain.
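For instance, a minimal sketch of the two options (the selector and class names here are hypothetical):

var $items = $('#menu .item'); // one DOM lookup, stored in a variable
$items.addClass('active');
$items.css('color', 'red');

// idiomatic jQuery: the same single lookup, expressed as a chain
$('#menu .item').addClass('active').css('color', 'red');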
I think the chainability of jQuery is a great feature... one should really use it more often.
For example:
$(this)
.find('.funky')
.css('width', 30)
.attr('title', 'Funky Title')
.end()
.fadeIn();
is much better (and more elegant) than the following, since you don't have to create two jQuery $(this) objects:
$(this).find('.funky').css('width', 30).attr('title', 'Funky Title');
$(this).fadeIn();
My guess would be no difference, or faster, due to lack of intermediaries.
The only major drawback is to clarity. If you think the chain is obvious without breaking it into multiple statements with intermediate variables -- by virtue of comments or just a nicely clean call chain -- then fine.
Let's say you have a TypeScript object where any element can be undefined. If you want to access a heavily nested component, you have to do a lot of comparisons against undefined.
I wanted to compare two ways of doing this in terms of performance: regular if-else comparisons and the lodash function get.
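Roughly, the two candidates look like this (sketched against the families object mentioned later in this thread; the exact nesting is hypothetical):

// 1) guard every level by hand with && (no if-else keywords needed)
var members = families && families.Trump && families.Trump.members
    ? families.Trump.members
    : undefined;

// 2) let lodash walk the path, with an optional default value
var members2 = _.get(families, 'Trump.members');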
I have found this beautiful tool called jsben where you can benchmark different pieces of JS code. However, I fail to interpret the results correctly.
In this test, lodash get seems to be slightly faster. However, if I define my variable in the Setup block (as opposed to the Boilerplate code), the if-else code is faster by a wide margin.
What is the proper way of benchmarking all this?
How should I interpret the results?
Is get so much slower that you can make an argument in favour of if-else clauses, in spite of the very poor readability?
I think you're asking the wrong question.
First of all, if you're going to do performance micro-optimization (as opposed to, say, algorithmic optimization), you should really know whether the code in question is a bottleneck in your system. Fix the worst bottlenecks until your performance is fine, then stop worrying overmuch about it. I'd be quite surprised if variation between these ever amounted to more than a rounding error in a serious application. But I've been surprised before; hence the need to test.
Then, when it comes to the actual optimization, the two implementations are only slightly different in speed, in either configuration. But if you want to test the deep access to your object, it looks as though the second one is the correct way to think about it. It doesn't seem as though it should make much difference in relative speeds, but the first one puts the initialization code where it will be "executed before every block and is part of the benchmark." The second one puts it where "it will be run before every test, and is not part of the benchmark." Since you want to compare data access and not data initialization, this seems more appropriate.
Given this, there seems to be a very slight performance advantage to the families && families.Trump && families.Trump.members && ... technique. (Note: no ifs or elses in sight here!)
But is it worth it? I would say not. The code is much, much uglier. I would not add a library such as lodash (or my favorite, Ramda) just to use a function as simple as this, but if I were already using lodash I wouldn't hesitate to use the simpler code here. Otherwise, I might import just this one function from lodash or Ramda, or simply write my own, as it's fairly simple code.
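If you went the write-your-own route, a minimal sketch might look like this (it supports only dot-separated string paths, whereas lodash's get also accepts arrays and bracket syntax):

// walk a dot-separated path, bailing out with undefined at the first gap
function get(obj, path, defaultValue) {
    var result = path.split('.').reduce(function (acc, key) {
        return acc == null ? undefined : acc[key];
    }, obj);
    return result === undefined ? defaultValue : result;
}

get(families, 'Trump.members', []); // same shape as the lodash call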
That native code is going to be faster than more generic library code shouldn't be a surprise. It doesn't always happen, as sometimes libraries get to take shortcuts that the native engine cannot, but it's likely the norm. The reason to use these libraries rarely has to do with performance, but with writing more expressive code. Here the lodash version wins, hands-down.
What is the proper way of benchmarking all this?
Only benchmark the actual code you are comparing; move as much as possible outside of the tested block. Run each of the two pieces a few (hundred) thousand times, to average out the influence of other parts.
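As a rough sketch of that idea (a hand-rolled loop with hypothetical names; real benchmark tools additionally handle warm-up, JIT effects, and statistical noise, which this ignores):

// run fn `iterations` times and return the average cost per call, in ms
function bench(fn, iterations) {
    var start = performance.now();
    for (var i = 0; i < iterations; i++) {
        fn();
    }
    return (performance.now() - start) / iterations;
}

bench(function () { /* code under test */ }, 100000);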
How should I interpret the results?
1) check if they are valid:
Do the results fit your expectation?
If not, could there be a cause for that?
Does the test case replicate your actual use case?
2) check if the result is relevant:
How does the time it takes compare to the actual time in your use case? If your code takes 200 ms to load, and both tests run in under ~1 ms, your result doesn't matter. If, however, you are trying to optimize code that runs 60 times per second, 1 ms is already a lot.
3) check if the result is worth the work:
Often you have to do a lot of refactoring, or you have to type a lot; does the performance gain outweigh the time you invest?
Is get so much slower that you can make an argument in favour of if-else clauses, in spite of the very poor readability?
I'd say no. Use _.get (unless you are planning to run it a few hundred times per second).
What's the difference between these two and which should I use?
$.data(this, 'timer');
vs
$(this).data('timer');
There is no meaningful difference in result. The difference is really just a matter of coding style.
The first is a more procedural approach where you call a globally namespaced function and pass it a couple arguments.
The second is more consistent with the object oriented style of jQuery where you create a jQuery object containing one or more DOM elements and then you call methods on it to affect those elements or get info from those elements.
If you were interested in the fine details of performance between the two methods, then you'd have to devise a performance test and measure in several browsers. But, if you really wanted to optimize performance of a given operation, you would probably cut the jQuery out of the code completely since it is rarely the fastest way to do something. It saves lots of coding time and offers great cross browser support (which both make it well worth it for most code), but often at the cost of some performance.
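For illustration only, a hedged sketch of one jQuery-free way to associate data with an element: a WeakMap keyed on the element itself. This assumes an ES2015+ environment (which postdates this question), and the id is hypothetical:

var timers = new WeakMap();
var el = document.getElementById('widget');

// store and retrieve without touching jQuery's internal data cache
timers.set(el, { timer: 42 });
var data = timers.get(el); // { timer: 42 }

// the WeakMap entry becomes collectible once `el` is unreachable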
There is no difference; the second one is better to use, but just for IntelliSense.
I'm building one of my first web apps using HTML5, specifically targeting iPhones.
Since I'm pretty new to this, I'm trying to develop some good coding habits, follow best practices, optimize performance, and minimize the load on the resource-constrained iPhone.
One of the things I need to do frequently... I have numerous divs (each of which has a unique id) that I'm frequently updating (e.g., with innerHTML), or modifying (e.g., style attributes with webkit transitions and transforms).
In general - am I better off using getElementById each time I need a handle to a div, or should I store references to each div I access in "global" variables at the start?
(I use "global" in quotes because I've really just got one truly global variable - it's an object that stores all my "global" variables as properties).
I assume using getElementById each time must have some overhead, since the function needs to traverse the DOM to find the div. But I'm not sure how taxing or efficient this function is.
Using global variables to store handles to each element must consume some memory, but I don't know if these references require just a trivial amount of RAM, or more than that.
So - which is better? Or, do both options consume such a trivial amount of resources that I should just worry about which produces more readable, maintainable code?
Many thanks in advance!
"In general - am I better off using getElementByID each time I need a handle to a div, or should I store references to each div"
When you're calling getElementById, you're asking it to perform a task. If you don't expect a different result when calling the same method with the same argument, then it would seem to make sense to cache the result.
"I assume using getElementByID each time must have some overhead, since the function needs to traverse the DOM to find the div. But, I'm not sure how taxing or efficient this function is."
In modern browsers especially, it's very fast, but not as fast as looking up a property on your global object.
"Using global variables to store handles to each element must consume some memory, but I don't know if these references require just a trivial amount of RAM, or more than that."
Trivial. It's just a pointer to an object that already exists. If you remove the element from the DOM with no intention to use it again, then of course you'll want to release your hold on it.
"So - which is better? Or, do both options consume such a trivial amount of resources that I should just worry about which produces more readable, maintainable code?"
Depends entirely on the situation. If you're only fetching it a couple times, then you may not find it worthwhile to add to your global object. The more you need to fetch the same element, the more sense it makes to cache it.
Here's a jsPerf test to compare. Of course size of your DOM as well as length of variable scope traversal and the number/depth of properties in your global object will play some role.
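For concreteness, a minimal sketch of the caching pattern the question describes (a single global object whose properties hold the element references; the ids and property names here are hypothetical):

// one true global, with element references cached as properties
var app = { els: {} };

// look each element up once, at startup
app.els.status = document.getElementById('status');
app.els.panel = document.getElementById('panel');

// later, reuse the cached references instead of calling getElementById again
app.els.status.innerHTML = 'Ready';
app.els.panel.style.webkitTransform = 'translate3d(0, 100px, 0)';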
Using a local variable or even an object property is much faster than getElementById(). However, both are so fast that their performance is generally irrelevant compared to any other operation you might do once you have the element. Even setting a single property on the element is orders of magnitude slower than retrieving it by either method.
So the main reason to cache an element is to avoid the rather long-winded document.getElementById(...) syntax, or to avoid having element ID strings scattered all over your code.
Is there any benefit to performance when I do the following in Mootools (or any framework, really)?:
var elem = $('#elemId');
elem.addClass('someClass');
elem.set('some attribute', 'some value');
etc, etc. Basically, I'm updating certain elements a lot on the DOM and I was wondering if creating a variable in memory and using that when needed was better than:
$('#elemId').addClass('someClass');
$('#elemId').set('some attribute', 'some value');
The changes to $('#elemId') are all over the place, in various different functions.
Spencer,
This is called caching, and it is one of the best practices.
When you say
$('#elemId');
it will go and query the DOM every time, so if you say
var elem = $('#elemId');
elem acts as a cached element and improves performance a lot.
This is mainly useful in IE, as it has memory leak problems.
Read this document, which is really good:
http://net.tutsplus.com/tutorials/javascript-ajax/14-helpful-jquery-tricks-notes-and-best-practices/
It depends how you query the DOM. Lookups by ID are extremely fast; CSS class lookups are the next fastest. So as long as you're querying by a single ID alone (not a complex selector containing an ID), there shouldn't be much of a benefit. However, if you're using any other selector, caching is the way to go.
http://code.google.com/speed/page-speed/docs/rendering.html#UseEfficientCSSSelectors
https://developer.mozilla.org/en/Writing_Efficient_CSS
Your first approach is faster than your second approach, because you "cache" the search for #elemId.
Meaning the calls to addClass and set don't require extra lookups in the DOM for your element.
However! You can also chain function calls:
$('#elemId').addClass('someClass').set('some attribute', 'some value');
Depending on your application, caching or chaining might work better, but definitely avoid identical sequential lookups in the same block.
Depending on the situation, caching can be as much as 99% faster than creating a jQuery object every time. In the case you presented it will not make much difference. If you plan to use the selector many times, you should definitely cache the object in a variable so it doesn't get created every time you run it.
A similar question was answered at Does using $this instead of $(this) provide a performance enhancement?
Check the performance log: http://jsperf.com/jquery-this-vs-this
You are considering using a local variable to cache a value of a potentially slow lookup.
How slow is the call itself? If it's fast, caching won't make much of a difference. If it's slow, then cache at the first call. Selectors vary significantly in their cost -- just think about how the code must find the element. If it's an ID, then the browser provides fast access, whereas classes and nodes may require full DOM scans. Check out profiling of jQuery (Sizzle) selectors to get a sense of these costs.
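A hedged sketch of that trade-off (the selectors are hypothetical): an #id lookup maps to the browser's fast getElementById path, so caching buys little, while a class/descendant selector may scan a large part of the DOM and is the one worth caching:

$('#logo').addClass('visible');   // cheap lookup: caching buys little

var $rows = $('.report .row');    // comparatively expensive lookup
$rows.addClass('zebra');          // reuse the cached result...
$rows.css('padding', '4px');      // ...instead of scanning the DOM again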
Can you chain the calls? Consider "chaining" method calls where possible. This provides the efficiency without introducing another variable.
For your example, I'd write:
$('#elemId').addClass('someClass').set('some attribute', 'some value');
How does the code read? Usually, if the same method is going to be called multiple times, it is clearer to DRY it up and use a local variable. The reader then understands the intent better -- you don't force them to scan all the jQuery calls to verify that they are the same. BTW, a fairly standard convention is to name jQuery variables starting with a $ -- which is legal in JavaScript -- as in:
var $elem = $('#elem');
$elem.addClass('someClass');
Hope this helps.
I'm working with some code that adds and removes various CSS classes on the same object. The code looks something like:
function switch_states(object_to_change) {
    if (object_to_change.hasClass('ready')) {
        object_to_change.removeClass('ready');
        object_to_change.addClass('not_ready');
    } else {
        object_to_change.removeClass('not_ready');
        object_to_change.addClass('ready');
    }
}
I suspect I might be able to get away with chaining these two snippets into something like object_to_change.removeClass('ready').addClass('not_ready'); but I have to wonder: besides legibility and the neato factor, does this give me any performance gains?
My question: Would chained objects do their work any faster than two non-chained ones? If so, why? If not, at what point does using chained objects become more efficient than disparate ones -- i.e., under what conditions do chained jQuery objects offer performance gains?
Would chained objects do their work any faster than two non-chained ones? If so, why?
Chained objects are faster when they start with a selector, and chaining prevents the selector being run against the DOM multiple times (a comparatively expensive operation).
Slower:
$(".myClass").val(123);
$(".myClass").addClass("newClass");
Faster:
$(".myClass")
.val(123)
.addClass("newClass");
Caching the selector in a variable has the same benefits as chaining (i.e. preventing expensive DOM lookups):
var $selctor = $(".myClass");
$selector.val(123);
$selector.addClass("newClass");
In your example, the answer is no. However, what chaining does give you is the ability to avoid declaring variables in places where you can just use the chain (and the current stack of elements) to perform various tasks.
I would recommend using chaining with newlines - this has become somewhat of a jQuery convention.
Well, you can't really chain an if() statement, but jQuery has a toggleClass() method that would seem appropriate here:
object_to_change.toggleClass('ready not_ready');
This removes the need for chaining vs separate calls in the first place. I don't know how it compares in terms of performance. Would need to test.
EDIT: I should note that the solution above assumes the object already has one of the two classes. If not, another approach would be to pass a function to toggleClass:
object_to_change.toggleClass(function (i, cls) {
    return cls.indexOf('ready') == -1 ? 'ready' : 'ready not_ready';
});
No performance gains here: it's the same object. Just the neato factor, which should not be underestimated.
The most obvious performance gain is in development and maintenance. If the code is clean, readable, and intent is apparent -- developer time should be reduced.
While this needs to be balanced against code performance, if there is a tested, measurable difference, then optimize for speed, if at all possible without reducing the readability of the code.
If readability/maintainability of the code will be impacted, look to see if the optimizations can be automated as part of the build process, to keep the maintainable code for further development.