I was wondering if there is any difference between .filter(':last') and .last()?
To me it looks like they do the same thing, but I'm new to jQuery. If there is no difference in the result, which one is recommended, or is it just a matter of personal preference?
last works by saying "give me the last element from the selection". It takes just two function calls and four lines of code to do so. It can't be done in a quicker way.
filter(':last'), however, is much more complex. It is a much more flexible system, allowing multiple elements to be returned if that's what you want, or multiple conditions, or a mixture of both. It is much less efficient, because it has to work out what you want. For instance, parsing ':last' takes a little time, whereas with the last function it's a simple property lookup.
last is by far the more efficient.
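A minimal sketch of the difference, assuming a hypothetical list of <li> elements:

var $items = $('ul li');
var $a = $items.last();            // simply slices the last element off the already-matched set
var $b = $items.filter(':last');   // re-runs the selector engine against every element in the set
// Both $a and $b end up wrapping the same single element.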
:last - Selects the last matched element.
last() - Reduce the set of matched elements to the final one in the set.
As you can see, they do the same thing (in terms of the end result, anyway).
last() is slightly faster than :last (although you may not notice it, it's always good to know).
.filter(":last"), although making the best (performance-wise) out of :last, still involves more function calls and is still slower than last() - although it does have its advantages (see #lonesomeday's answer for those).
My recommendation, however, would generally be to use last() rather than filter(':last').
I have read that method chaining in D3.js performs several actions in a single line of code, but I am not sure how it affects performance when executing.
For example, with method chaining we would write the code like this:
var data = [10, 20, 30, 40];

wrap.selectAll("rect")
    .data(data)
    .enter()
    .append("rect")
    .attr("x", function(d, i) { return scale(i); })
    .attr("y", function(d, i) { return h - d; })
    .attr("width", scale.rangeBand())
    .attr("height", function(d, i) { return d; })
    .style("fill", "red");
The above code generates 4 rectangles, and for each of those 4 rectangles we set the attributes "x", "y", "width" and "height":
No. of rectangles: 4
No. of attributes ("x", "y", "width", "height"): 4
No. of iterations per attribute: 4 (since there are 4 rectangles)
No. of iterations for all 4 attributes: 4 * 4 = 16
Are that many iterations really necessary? Is it fast?
Normally we would do something like this:
// Conventional approach: grab the DOM nodes and set every attribute in one pass
var rects = wrap.selectAll("rect")[0];   // in d3 v3, the first group of raw DOM nodes
rects.forEach(function(el, i) {
    var d = data[i];
    el.setAttribute("x", scale(i));
    el.setAttribute("y", h - d);
    el.setAttribute("width", w);
    el.setAttribute("height", h);
});
In the above method, the number of iterations is only 4.
So what is the advantage of D3.js method chaining and selection.data() over the conventional approach above?
Please clarify.
I was thinking about this today.
I think there is a fundamental problem with chaining.
Namely, you cannot partition data into different shapes that easily. And, if you could, you can't assume similar attributes chained from different shapes. A square and a circle, say, have different attributes defining their size and location.
But, aside from this conflict, which is not resolved by syntax alone, there remains the question you have asked:
"Is it an efficient representation?"
It makes the code look nice. But, in fact, each link in the chain is a function call that may go down a deep stack before anything happens. And that's slow.
So, one begins to think of an alternative similar to your loop. Or, perhaps the attributes can be gathered and assigned in one last shot - almost a compilation.
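One possible sketch of that idea, reusing the wrap, data, scale and h from the question (not the author's code, just an illustration): gather all of the per-node work into a single pass with selection.each:

wrap.selectAll("rect")
    .data(data)
    .enter()
    .append("rect")
    .each(function(d, i) {
        // one visit per node: set everything while we are here
        this.setAttribute("x", scale(i));
        this.setAttribute("y", h - d);
        this.setAttribute("width", scale.rangeBand());
        this.setAttribute("height", d);
        this.style.fill = "red";
    });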
Don't forget that JavaScript is interpreted.
It is easy to be deceived into thinking that JavaScript will provide the efficiency you are looking for in certain applications. Of course, a tired user clicking on this and that would not notice the difference. But there is the animation, and the interaction of working parts when changes cascade in some way. Certain applications really need the efficiency.
Even the forEach that you are using can be suspect. I was working with a younger programmer last year, using D3, and part of one of our displays ran woefully slowly. (A tired user would certainly have been jolted into a tizzy.) We took it out of the forEach and ran it in a regular "for" loop construct, and the same code ran with incredible speed. So, there are parts of JavaScript that are not as ready for prime time as you might think.
It is probably better to use many of the new constructs that are making their way into the language for many parts of an application. But, when it counts, you might wait for some update and use more optimized parts of the language.
I am fairly sure that d3 is not optimal in setting attributes. And, now I am trying to think of some better representation than chaining.
Remember that the act of iterating is itself negligible. If the cost of setting an attribute were 1, you would be comparing 16 * 1 with 4 * 4, so it's not really a big problem. The chaining is a matter of concision.
Using Big O notation to analyse the algorithms, both are O(n).
I always remove the 'current' class from all siblings and then add 'current' to the clicked one. I want to know whether it would be faster to remove 'current' only from the element that actually has it. It seems like a simple question, but I really want to know.
Yes, filtering the query to a smaller set of elements will perform faster, because there are fewer elements to check.
In modern browsers, jQuery will use native methods to query the DOM, so adding the selector has a negligible performance impact.
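A small sketch of the two approaches inside a click handler (the markup and class names are hypothetical):

// remove 'current' from every sibling, then mark the clicked tab
$(this).siblings().removeClass('current');
$(this).addClass('current');

// remove 'current' only from the sibling that actually has it
$(this).siblings('.current').removeClass('current');
$(this).addClass('current');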
I don't think there's much difference, since there's only one "current". It doesn't matter much whether you query one more element or one fewer.
Usually I'll first select the outer element to narrow the search down:
$('#selectionDiv').find() ....
Depending on how many elements you are re-classing, the impact of the optimization will of course vary.
I tested it (http://jsperf.com/reclassing-all-or-one) using 7 divs (which seemed reasonable for, say, navigation tabs), and the difference was significant percentage-wise (reclassing all was about 30% slower than reclassing only one). In terms of actual time it may not matter, but I can't really see any reason not to be specific.
I currently have code that is pulling in data via jQuery and then displaying it using the each method.
However, I was running into an issue with sorting, so I looked into using, and added, jQuery's filter method before the sort (which makes sense).
I'm now looking at removing the sort, and am wondering if I should leave the filter call as-is, or move it back into the each.
The examples in the jQuery API documentation for filter stick with styling results, not with the output of textual content (specifically, not using each()).
The documentation currently states that "[t]he supplied selector is tested against each element [...]," which makes me believe that doing a filter and an each would result in non-filtered elements being looped through twice, versus only once if the check was made solely in the each loop.
Am I correct in believing that is more efficient?
EDIT: Dummy example.
So this:
// data is XML content
data = data.filter(function (a) {
    return ($(this).attr('display') == "true");
});
data.each(function () {
    // do stuff here to output to the page
});
Versus this:
// data is XML content
data.each(function () {
    if ($(this).attr('display') == "true") {
        // do stuff here to output to the page
    }
});
Exactly as you said:
The documentation currently states that "the supplied selector is tested against each element [...]", which makes me believe that doing a filter and an each would result in non-filtered elements being looped through twice, versus only once if the check was made solely in the each loop.
From your code we can clearly see that you are using each in both cases, which is already a loop. And the filter is itself another loop (with an if inside it to do the filtering). That is, we are comparing the performance of two loops against one loop. Inevitably, fewer loops = better performance.
I created this Fiddle and profiled it with the Firebug profiler. As expected, the second option with only one loop is faster. Of course, with this small number of elements the difference was only 0.062 ms, but it would obviously increase linearly with more elements.
Since many people are keen to point out that the difference is small and that you should choose based on maintainability, I'll add my opinion: I agree with that too. In fact I think the more maintainable code is the version without the filter, but that's a matter of taste. Your question, though, was about which is more efficient, and that is what was answered, even if the difference is small.
You are correct that using filter and each is slower; it is faster to use just the each loop. Where possible, do optimise it to use fewer loops.
But this is a micro-optimisation. It should only be done when it's "free" and doesn't come at the cost of readable code. I would personally pick one or the other based on a style/readability preference rather than on performance.
Unless you've got a huge sets of DOM elements you won't notice the difference (and if you do then you've got bigger problems).
And if you care about this difference then you care about not using jQuery because jQuery is slow.
What you should care about is readability and maintainability.
$(selector).filter(function() {
    // get elements I care about
}).each(function() {
    // deal with them
});
vs
$(selector).each(function() {
    // get elements I care about
    if (condition) {
        // deal with them
    }
});
Whichever makes your code more readable and maintainable is the optimum choice. As a separate note, filter is a lot more powerful when used with .map than when used with .each.
Let me also point out that optimising from two loops to one loop is optimising from O(n) to O(n). That's not something you should care about. In the past I also felt that it was "better" to put everything in one loop because you only loop once, but this really limits your ability to use map/reduce/filter.
Write meaningful, self-documenting code. Only optimise bottlenecks.
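To illustrate the .filter/.map point above, a hedged sketch using hypothetical price cells:

var prices = $('td.price')
    .filter(function () {
        return parseFloat($(this).text()) > 10;   // keep only the cells we care about
    })
    .map(function () {
        return parseFloat($(this).text());        // transform each kept element into a number
    })
    .get();                                       // unwrap the jQuery object into a plain array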
I would expect the performance here to be very similar, with the each being slightly faster (probably noticeable in large datasets where the filtered set is still large). Filter probably just loops over the set anyway (someone correct me if I'm wrong). So the first example loops the full set and then loops the smaller set. The 2nd just loops once.
However, if possible, the fastest way would be to include the filter in your initial selector. So let's say your current data variable is the result of calling $("div"). Instead of calling that and then filtering it, use this to begin with:
$("div[display='true']")
I generally don't worry about micro-optimizations like this since in the grand scheme of things, you'll likely have a lot more to worry about in terms of performance than jQuery .each() vs. .filter(), but to answer the question at hand, you should be able to get the best results using one .filter():
data.filter(function() {
    return ($(this).attr('display') === "true");
}).appendTo($('body'));
For a primitive performance comparison between .each() and .filter(), you can check out this codepen:
http://codepen.io/thdoan/pen/LWpwwa
However, if all you're trying to do is output all nodes with display="true" to the page, then you can simply do as suggested by James Montagne (assuming the node is <element>):
$('element[display=true]').appendTo($('body'));
Is there a significant difference if I construct a jQuery object around an element once or many times? For instance:
var jEl = $(el);
$.each(myArray, function() {
    jEl.addClass(this);
});
versus:
$.each(myArray, function() {
    $(el).addClass(this);
});
I know there are other ways to write this that might sidestep the issue, but my question is about whether I should work to do $(el) just once, or if it truly is irrelevant. The example is contrived.
Bonus points for explaining just what $(el) does behind the scenes.
I know that theoretically more work is being done; what I don't know is whether it matters. If jQuery caches it, or browsers are all really good at the second request, or whatever, then it's not worth the trouble.
FYI: The relevant jQuery API link is here (which I provide because $() isn't the easiest thing to Google for): http://api.jquery.com/jQuery/#using-dom-elements
Also worth including this useful link: http://www.artzstudio.com/2009/04/jquery-performance-rules/, where several of his points center around saving, chaining, and selecting well.
Yes, there is a performance impact.
In the first example, only one instance is created.
In the second, an instance will be created for each iteration of the loop.
Depending on the size of myArray, that could lead to a lot of extraneous instances being created which will chew through memory.
The first way will be faster. In the second you are creating a new jQuery object each time; how much that costs will also depend on your browser, your page, and what el is.
If el is a string (for example "#myname") then $(el) will "query" the DOM to find that element. jQuery is quite fast in doing queries but it does take some time. So doing this many times will take that many times longer.
Do I get the bonus points?
Yes there will be. Each time $() is called, jQuery does a separate search of the DOM for the element.
If each search takes 0.1 seconds (usually much much faster, but it's an easy number to work with), and you've got 1000 elements in your array, that's 100 seconds devoted just to traversing the DOM in the second example, as opposed to just 0.1 seconds in the first.
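A sketch of the caching pattern these answers describe (the '#box' id is hypothetical):

var $el = $('#box');               // one DOM search, one jQuery object
$.each(myArray, function (i, cls) {
    $el.addClass(cls);             // reuses the cached object
    // $('#box').addClass(cls);    // would search the document again on every pass
});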
I'm looking at improving the performance of my jQuery selectors, so are there any tips or articles on the most performant jQuery selectors? For example, selecting a div by its id. Is there anywhere online where I can provide HTML and compare the different selectors I could use to select the required element?
You can compare selector performance here: http://jsperf.com/
Just set up your HTML, include jQuery, and add each selector you want to compare as a test case.
Many of the rules here still apply; however, the game changed a bit in jQuery 1.4.3+: since then, Sizzle (jQuery's selector engine) uses querySelectorAll() in browsers that support it.
This article goes into some detail about jQuery selectors and their performance. It's mainly about using jQuery the right way. Since a lot of jQuery use revolves around selectors, the article spends some time on them.
Basically, a few things to remember (a short sketch follows the list):
If you're looking for performance, use selectors that delegate to native DOM-inspection methods (getElementById, getElementsByTagName)
Cache results
Pseudo-selectors can cause a performance hit.
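A short sketch of those tips in practice (the ids and tags are hypothetical):

var $table = $('#report');                 // delegates to getElementById: fast
var $rows = $table.find('tr');             // cache the result instead of re-selecting it later
var $visible = $rows.filter(':visible');   // pseudo-selectors like :visible are the slower part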
One thing these articles don't address is what your starting point is. If you are starting with the entire DOM tree, then these articles are actually useful.
However, if you have an element to start with, it then depends on what your search is. Most of my dynamic JavaScript with MVC templates tends to grab the element on which the action is taken, then search upward for parent objects. This eliminates the need to uniquely name a container when they are randomly generated, which makes things a lot easier from a dynamic development standpoint.
While searching for a near-parent node may not be as fast as searching for an ID, the performance should be negligible compared to the amount of time and/or performance of generating and tracking a number of unique IDs.
As with everything in development, "it depends" will reign here.
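A sketch of the "start from the clicked element and walk up" pattern described above (the event wiring and class names are hypothetical):

$(document).on('click', '.item .remove', function () {
    var $row = $(this).closest('.item');   // near-parent lookup, no unique id required
    $row.fadeOut();
});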
I notice a lot of these types of questions are restricted to performance comparisons between different jQuery selectors.
I recently came across an article that compares jQuery selectors against their native Javascript counterparts.
It might sound like a lot of hassle, but the performance gain is quite substantial. More than I imagined actually.
Article:
http://www.sitepoint.com/jquery-vs-raw-javascript-1-dom-forms/
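For a flavour of what the article compares, a minimal sketch (the 'chart' id is hypothetical):

var el = document.getElementById('chart');   // native: returns the raw DOM node directly
var $el = $('#chart');                       // jQuery: runs the selector, then wraps the node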