I'm working with some code that adds and removes various CSS classes on the same object. The code looks something like:
function switch_states(object_to_change) {
    if (object_to_change.hasClass('ready')) {
        object_to_change.removeClass('ready');
        object_to_change.addClass('not_ready');
    } else {
        object_to_change.removeClass('not_ready');
        object_to_change.addClass('ready');
    }
}
I suspect I might be able to get away with chaining these two snippets into something like object_to_change.removeClass('ready').addClass('not_ready'); But I have to wonder: besides legibility and the neato factor, does this give me any performance gains?
My question: Would chained objects do their work any faster than two non-chained ones? If so, why? If not, at what point does using chained objects become more efficient than disparate ones -- i.e., under what conditions do chained jQuery objects offer performance gains?
Would chained objects do their work any faster than two non-chained ones? If so, why?
Chained calls are faster when they start with a selector, because chaining prevents the selector from being run against the DOM multiple times (a comparatively expensive operation).
Slower:
$(".myClass").val(123);
$(".myClass").addClass("newClass");
Faster:
$(".myClass")
.val(123)
.addClass("newClass");
Caching the selector in a variable has the same benefits as chaining (i.e. preventing expensive DOM lookups):
var $selector = $(".myClass");
$selector.val(123);
$selector.addClass("newClass");
In your example, the answer is no. However, what chaining does give you is the ability to avoid declaring variables in places where you can simply use the chain (and the current stack of elements) to perform various tasks.
I would recommend using chaining with newlines; this has become something of a jQuery convention.
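As a rough sketch of what that looks like for the original function (assuming, as in the question, that object_to_change is already a jQuery object), each branch collapses into a single chain:

function switch_states(object_to_change) {
    if (object_to_change.hasClass('ready')) {
        // chain the two class operations on the same jQuery object
        object_to_change.removeClass('ready').addClass('not_ready');
    } else {
        object_to_change.removeClass('not_ready').addClass('ready');
    }
}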
Well, you can't really chain an if() statement, but jQuery has a toggleClass() method that would seem appropriate here:
object_to_change.toggleClass('ready not_ready');
This removes the need for chaining vs separate calls in the first place. I don't know how it compares in terms of performance. Would need to test.
EDIT: I should note that the solution above assumes the object already has one class or the other. If not, another approach would be to pass a function to toggleClass:
object_to_change.toggleClass(function( i, cls ) {
    // if neither class is present yet, just add 'ready'; otherwise toggle both
    return cls.indexOf( 'ready' ) == -1 ? 'ready' : 'ready not_ready';
});
No performance gains here: it's the same object. Just the neato factor, which should not be underestimated.
The most obvious performance gain is in development and maintenance. If the code is clean, readable, and intent is apparent -- developer time should be reduced.
This needs to be balanced against code performance: if there is a tested, measurable difference, then optimize for speed, if at all possible without reducing the readability of the code.
If readability/maintainability of the code will be impacted, look to see if the optimizations can be automated as part of the build process, to keep the maintainable code for further development.
If I write the code behind a native method's layer of abstraction myself, instead of using that method (which behind the scenes does the same thing I would write manually), does that have a positive impact on the app's performance or speed?
There are (at least) three significant advantages of using a built-in function instead of implementing it yourself:
(1) Speed - built-in functions generally invoke lower-level code provided by the browser (not written in JavaScript), and that code often runs significantly faster than the equivalent JavaScript would. Polyfills are slower than native code.
(2) Readability - if another reader of your code sees ['foo', 'bar'].join(' '), they'll immediately know what it does, and how the join method works. On the other hand, if they see something like doJoin(['foo', 'bar'], ' '), where doJoin is your own implementation of the same method, they'll have to look up the doJoin method to be sure of what's happening there.
(3) Accuracy - what if you make a mistake while writing your implementation, but the mistake is not immediately obvious? That could be a problem. In contrast, the built-in methods almost never have bugs (and, when spotted, usually get fixed).
One could also argue that there's no point spending effort on a solved problem.
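To make points (2) and (3) concrete, here is a hedged sketch of what a hand-rolled replacement for Array.prototype.join might look like (doJoin is a hypothetical name, not from any library):

// Hypothetical hand-rolled version of Array.prototype.join
function doJoin(arr, separator) {
    var result = '';
    for (var i = 0; i < arr.length; i++) {
        result += (i > 0 ? separator : '') + arr[i];
    }
    return result;
}

doJoin(['foo', 'bar'], ' ');   // the reader has to look up doJoin to be sure
['foo', 'bar'].join(' ');      // immediately recognizable

// Note: unlike the built-in, this turns undefined entries into the string
// "undefined" instead of '', which illustrates point (3) about accuracy.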
Yes, there is a difference in efficiency in some cases. There are some examples in the docs for the library fast.js. To summarize what they say: you don't have to handle all of the cases laid out in the spec, so sometimes you can do some things faster than the built-in implementations.
I wouldn't take this as a license to make your code harder to read/maintain/reuse based on premature optimization, but yes you may gain some speed with your own implementation of a native method depending on your use case.
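As a rough illustration of the kind of shortcut such libraries take (this specific function is not claimed to be from fast.js), a hand-rolled indexOf can skip the sparse-array and fromIndex handling the spec requires:

// Simplified indexOf that assumes a dense array and no fromIndex argument;
// Array.prototype.indexOf must also handle holes and an optional fromIndex
// (including negative values) per the spec.
function fastIndexOf(arr, value) {
    for (var i = 0; i < arr.length; i++) {
        if (arr[i] === value) {
            return i;
        }
    }
    return -1;
}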
Let's say you have a TypeScript object where any element can be undefined. If you want to access a heavily nested property, you have to do a lot of comparisons against undefined.
I wanted to compare two ways of doing this in terms of performance: regular if-else comparisons and the lodash function get.
I have found this beautiful tool called jsben where you can benchmark different pieces of JS code. However, I fail to interpret the results correctly.
In this test, lodash get seems to be slightly faster. However, if I define my variable in the Setup block (as opposed to the Boilerplate code), the if-else code is faster by a wide margin.
What is the proper way of benchmarking all this?
How should I interpret the results?
Is get so much slower that you can make an argument in favour of if-else clauses, in spite of the very poor readability?
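For concreteness, the two approaches being compared look roughly like this (the families object and its property names are illustrative, and _ is assumed to be lodash):

// illustrative data shape; any level may be undefined
var families = { Trump: { members: ['Donald'] } };

// plain guarded access ("if-else" style)
var members1;
if (families && families.Trump && families.Trump.members) {
    members1 = families.Trump.members;
} else {
    members1 = undefined;
}

// lodash get
var members2 = _.get(families, 'Trump.members');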
I think you're asking the wrong question.
First of all, if you're going to do performance micro-optimization (as opposed to, say, algorithmic optimization), you should really know whether the code in question is a bottleneck in your system. Fix the worst bottlenecks until your performance is fine, then stop worrying overmuch about it. I'd be quite surprised if variation between these ever amounted to more than a rounding error in a serious application. But I've been surprised before; hence the need to test.
Then, when it comes to the actual optimization, the two implementations are only slightly different in speed, in either configuration. But if you want to test the deep access to your object, it looks as though the second one is the correct way to think about it. It doesn't seem as though it should make much difference in relative speeds, but the first one puts the initialization code where it will be "executed before every block and is part of the benchmark." The second one puts it where "it will be run before every test, and is not part of the benchmark." Since you want to compare data access and not data initialization, this seems more appropriate.
Given this, there seems to be a very slight performance advantage to the families && families.Trump && families.Trump.members && ... technique. (Note: no ifs or elses in sight here!)
But is it worth it? I would say not. The code is much, much uglier. I would not add a library such as lodash (or my favorite, Ramda) just to use a function as simple as this, but if I were already using lodash I wouldn't hesitate to use the simpler code here. Otherwise I might import just that one function from lodash or Ramda, or simply write my own, as it's fairly simple code.
That native code is going to be faster than more generic library code shouldn't be a surprise. It doesn't always happen, as sometimes libraries get to take shortcuts that the native engine cannot, but it's likely the norm. The reason to use these libraries rarely has to do with performance, but with writing more expressive code. Here the lodash version wins, hands-down.
What is the proper way of benchmarking all this?
Only benchmark the actual code you are comparing, and move as much as possible outside of the tested block. Run each of the two pieces a few (hundred) thousand times to average out the influence of other parts.
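A minimal sketch of that idea (the data, the iteration count, and the benchmark helper are all illustrative, and _ is assumed to be lodash):

// setup outside the timed block
var families = { Trump: { members: ['Donald'] } };
var iterations = 100000;

function benchmark(label, fn) {
    var start = performance.now();
    for (var i = 0; i < iterations; i++) {
        fn(families);
    }
    console.log(label + ': ' + (performance.now() - start).toFixed(2) + ' ms');
}

benchmark('guards', function (data) {
    return data && data.Trump && data.Trump.members;
});
benchmark('_.get', function (data) {
    return _.get(data, 'Trump.members');
});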
How should I interpret the results?
1) check if they are valid:
Do the results fit your expectation?
If not, could there be a cause for that?
Does the testcase replicate your actual usecase?
2) check if the result is relevant:
How does the time it takes compare to the actual time in your use case? If your code takes 200ms to load, and both tests run in under ~1ms, your result doesn't matter. If, however, you are trying to optimize code that runs 60 times per second, 1ms is already a lot.
3) check if the result is worth the work
Often you have to do a lot of refactoring, or you have to type a lot; does the performance gain outweigh the time you invest?
Is get so much slower that you can make an argument in favour of if-else clauses, in spite of the very poor readability?
I'd say no. Use _.get (unless you are planning to run it a few hundred times per second).
What's the difference between these two and which should I use?
$.data(this, 'timer');
vs
$(this).data('timer');
There is no meaningful difference in result. The difference is really just a matter of coding style.
The first is a more procedural approach where you call a globally namespaced function and pass it a couple arguments.
The second is more consistent with the object oriented style of jQuery where you create a jQuery object containing one or more DOM elements and then you call methods on it to affect those elements or get info from those elements.
If you were interested in the fine details of performance between the two methods, then you'd have to devise a performance test and measure in several browsers. But, if you really wanted to optimize performance of a given operation, you would probably cut the jQuery out of the code completely since it is rarely the fastest way to do something. It saves lots of coding time and offers great cross browser support (which both make it well worth it for most code), but often at the cost of some performance.
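A rough sketch of what "cutting jQuery out" could look like for this particular case, using a WeakMap keyed by the DOM node (this is my own illustration, not something from the answer above):

// Store per-element data without jQuery, keyed by the DOM node itself.
var timers = new WeakMap();

function startTimer(element) {
    // roughly equivalent to $(element).data('timer', id)
    timers.set(element, setInterval(function () { /* tick */ }, 1000));
}

function getTimer(element) {
    // roughly equivalent to $(element).data('timer') or $.data(element, 'timer')
    return timers.get(element);
}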
There is no difference; the second one is better to use, but just for IntelliSense.
while (div.hasChildNodes()) {
    fragment.appendChild(div.firstChild);
}

while (div.firstChild) {
    fragment.appendChild(div.firstChild);
}
Comparing the two pieces of pseudocode above, they both append each child of div to fragment until there are no more children.
When would you favour hasChildNodes() or firstChild? They seem identical.
If the APIs are so similar, then why do they both exist? Why does hasChildNodes() exist when I can just coerce firstChild from null to false?
a) micro-optimization!
b) Although it seems to be common practice, I'm not fond of relying on null/non-null values as a substitute for false/true. It's a space saver that's unnecessary with server compression enabled (and can sometimes cause subtle bugs). Opt for clarity every time, unless it's demonstrated to be causing bottlenecks.
I'd go for the first option, because it explains your intent perfectly.
There's no functional difference between your two implementations. Both generate the same results in all circumstances.
If you wanted to pursue whether there was a performance difference between the two, you would have to run performance tests on the desired target browsers to see if there was a meaningful difference.
Here's a performance test of the two. It appears to be browser-specific for which one is faster. hasChildNodes() is significantly faster in Chrome, but firstChild is a little bit faster in IE9 and Firefox.
I would suggest hasChildNodes() over firstChild just due to the return values: hasChildNodes() returns false if there are no children, while firstChild returns null, which makes checking whether children exist more straightforward. Really, it depends on how you plan on manipulating the DOM to achieve your results.
Your two examples will usually* behave identically, aside from some (probably) negligible differences in speed.
The only reason to favor one over the other would be clarity, specifically how well each construct represents the ideas the rest of your code is implementing, modulated by whether one of them is easier for you to visually parse when you're reading the code. I suggest favoring clarity over performance or unnecessarily pedantic correctness whenever possible.
* Note: I'm assuming you're running this code on a webpage in a vaguely modern browser. If your code needs to run in other contexts, there might be a significant speed difference or even functional (side-effect) differences between .hasChildNodes() and .firstChild: One requires a function call to be set up and executed, the other requires that a script-ready DOM compliant representation of the first child be constructed if it isn't already available.
Assuming DOM calls are slow, you could optimize further (at the expense of readability) by using a single call to firstChild:
var child;
while ( (child = div.firstChild) ) {
    fragment.appendChild(child);
}
... although (bizarrely, to me) this apparently doesn't help at all and makes things worse, according to this jsPerf: http://jsperf.com/haschildnodes-vs-firstchild/2
As to why both methods exist, remember that DOM is not exclusive to JavaScript. In a Java DOM implementation, for example, you would have to explicitly compare firstChild (or more likely firstChild()) to null.
I've been using jQuery a long time, and I've been writing a slideshow plugin for my work, and I (not 100% consciously) wrote probably 75% of it in a single chain. It's fully commented and I specify each end() and what it resets the chain to, etc., but does this slow down jQuery or the DOM loading, or does this actually speed it up?
It depends on your specific code, as always. As for storing a reference vs .end(), well... with a really long chain it's faster not to chain and instead store a reference, simply because with .end() the chain has to handle the extra baggage (storing/restoring), like the .prevObject reference, the .selector, .context, etc. that you probably don't care about in many cases... and just more intertwined references to previous objects.
Where it's more costly is harder to measure...it's not the execution (though that is slower, even if infinitesimally)...it's the more complicated garbage collection to clean up all those objects later, since the dependency graph is now much larger.
Now... will it make a measurable difference? Not unless your chain is really long, in which case it's probably a micro-optimization you need not worry about in most cases.
99% of the time, unless you're making some egregious, performance-penalizing call, don't worry about it, as with most micro-optimizations. If you're having a problem with performance, then get into it.
One of the most expensive things you can do in a modern browser is to access and manipulate the DOM. Chaining lets you minimize the actual lookups that you have to do, which can mean significantly faster code. The other option is to do the initial lookup, store that in a variable, and do everything off of that variable. That being said, jquery was specifically designed with that chaining api in mind, so it is more idiomatic to chain.
I think the chainability of jQuery is a great feature... one should really use it more often.
for example:
$(this)
    .find('.funky')
    .css('width', 30)
    .attr('title', 'Funky Title')
    .end()
    .fadeIn();
is much better (and more elegant), since you don't have to create two jQuery $(this) objects, than this:
$(this).find('.funky').css('width', 30).attr('title', 'Funky Title');
$(this).fadeIn();
My guess would be no difference, or faster, due to lack of intermediaries.
The only major drawback is to clarity. If you think it is obvious without breaking it into multiple lines with intermediate variables (by virtue of comments or just a nicely clean call chain), then that's fine.