Is using $("#vacations").find("li").last() a better practice than $("#vacations li:last")?
Background and my thoughts:
I was playing with a nice interactive try jQuery tutorial and one of the tasks says:
As you are looking through your code, you notice that someone else is selecting the last vacation with: $("#vacations li:last"). You look at this and you think, "Traversal would make this way faster!" You should act on those thoughts, refactor this code to find the last li within #vacations using traversal instead.
Why would I think so? To me, using a selector seems more high-level than traversing: when I specify a selector, it is up to jQuery to decide how best to get the single result I need (with no need to return interim results).
What is the extra overhead of using composite selectors? Is it because the current implementation of the selector logic just parses the string and then uses the traversal API? Is parsing a string really that slow? Is there a chance that a future implementation will exploit the fact that it does not need to return interim results, and so end up faster than traversal?
There's no cut-and-dried answer to this, but with respect to the :last selector you're using: it's a proprietary extension to the Selectors API standard, and because of that it isn't valid to use with the native .querySelectorAll method.
What Sizzle does is basically try to use your selector with .querySelectorAll, and if it throws an Exception due to an invalid selector, it'll default to a purely JavaScript based DOM selection/filtering.
This means including selectors like :last will cause you to not get the speed boost of DOM selection with native code.
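A rough model of that try-and-fall-back behavior (simplified for illustration, not jQuery's actual code; `doc` and `fallbackEngine` are stand-ins so the function can run outside a browser):

```javascript
// querySelectorAll throws on selectors like ":last" that aren't valid
// CSS, so the engine catches the exception and falls back to a
// pure-JavaScript selection path.
function select(selector, doc, fallbackEngine) {
  try {
    return doc.querySelectorAll(selector); // fast native path
  } catch (e) {
    return fallbackEngine(selector); // slower pure-JS path
  }
}
```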
Furthermore, there are optimizations included so that when your selector is very simple, like just an ID or an element name, the native getElementById and getElementsByTagName will be used, which are extremely fast; usually even faster than querySelectorAll.
And since the .last() method just grabs the last item in the collection instead of filtering all the items, which is what Sizzle filters normally do (or at least used to), that also gives a boost.
IMO, keep away from the proprietary stuff. Now that .querySelectorAll is pretty much ubiquitous, there are real advantages to only using standards-compliant selectors. Do any further filtering post DOM selection.
In the case of $("#vacations").find("li"), don't worry about the interim results. This will use getElementById followed by getElementsByTagName, and will be extremely fast.
If you're really super concerned about speed, reduce your usage of jQuery, and use the DOM directly.
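For comparison, here is a plain-DOM sketch of the same lookup; `doc` stands in for the global document so the function is easy to test outside a browser:

```javascript
// Pure-DOM equivalent of $("#vacations").find("li").last():
// look the list up by id, then take the last <li> directly.
function lastVacationLi(doc) {
  var list = doc.getElementById("vacations");      // native, very fast
  var items = list.getElementsByTagName("li");     // native, very fast
  return items[items.length - 1];                  // direct index, no filtering pass
}
```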
You'll currently find notes in the docs for selectors like :last, that warn you about the performance loss:
Because :last is a jQuery extension and not part of the CSS specification, queries using :last cannot take advantage of the performance boost provided by the native DOM querySelectorAll() method. To achieve the best performance when using :last to select elements, first select the elements using a pure CSS selector, then use .filter(":last").
But I'd disagree that .filter(":last") would be a good substitute. Much better are methods like .last(), which target the element directly instead of filtering the set. I have a feeling they just want people to keep using their non-standards-compliant selectors. IMO, you're better off just forgetting about them.
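The cost difference is easy to see in a simplified model: indexing the end of a collection is one step, while a filter has to visit every member.

```javascript
// What .last() effectively does: one direct index. O(1).
function lastByIndex(items) {
  return items[items.length - 1];
}

// What a filter-style approach has to do: test every item. O(n).
function lastByFilter(items) {
  var kept = [];
  for (var i = 0; i < items.length; i++) {
    if (i === items.length - 1) kept.push(items[i]);
  }
  return kept[0];
}
```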
Here's a test for your setup: http://jsperf.com/andrey-s-jquery-traversal
Sizzle, jQuery's selector engine, parses the string with regex and tries to speed up very basic selectors by using getElementById and getElementsByTagName. If your selector is anything more complicated than #foo and img, it'll try to use querySelectorAll, which accepts only valid CSS selectors (no :radio, :eq, :checkbox or other jQuery-specific pseudo-selectors).
The selector string is both less readable and slower, so there's really no reason to use it.
By breaking the selector string up into simple chunks that Sizzle can parse quickly (#id and tagname), you're basically just chaining together calls to getElementById and getElementsByTagName, which is about as fast as you can get.
Related
I have a jQuery selector:
$('#myId span')
Is that really a performance dog vs:
$('#myId').find('span')
The first is obviously a bit cleaner to write and I'd like to stick with that if possible.
Test: http://jsperf.com/descend-from-id-vs-select-and-find/3
$('#myId span') will cause jQuery to parse the string using its Sizzle selector engine, reading it from right-to-left, beginning its search with span.
$('#myId').find('span') will cause jQuery to select #myId immediately (bypassing the step to parse with Sizzle), and then traverse down the DOM, multiple levels, to find all descendants.
So the latter is faster.
You could also try $('#myId').children('span'), which might be even faster in some cases, since it will only descend a single level to find children only (as opposed to find, which keeps going).
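A toy model of that one-level vs. all-levels distinction, with nodes as plain objects of the shape { tag: "span", kids: [] } (an assumption for illustration, not jQuery internals):

```javascript
// Like .children(tag): looks one level down only.
function childrenByTag(node, tag) {
  return node.kids.filter(function (k) {
    return k.tag === tag;
  });
}

// Like .find(tag): recurses through every level of descendants.
function findByTag(node, tag) {
  var out = [];
  node.kids.forEach(function (k) {
    if (k.tag === tag) out.push(k);
    out = out.concat(findByTag(k, tag));
  });
  return out;
}
```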
So in my app, the user can create content inside certain div tags, and each piece of content, or as I call them "elements", has its own object. Currently I use a function with jQuery selectors to calculate which div tag the element has been placed inside, but I was wondering: in terms of performance, wouldn't it be better to store a reference to the div tag once the element has been created, instead of calculating it later?
So right now I use something like this:
$('.div[value='+divID+']')
But instead I could just store the reference inside the element when I'm creating it. Would that be better for performance?
If you have lots of these bindings it would be a good idea to store references to them. As mentioned in the comments, variable lookups are much much faster than looking things up in the DOM - especially with your current approach. jQuery selectors are slower than the pure DOM alternatives, and that particular selector will be very slow.
Here is a test based on the one by epascarello showing the difference between jQuery, DOM2 methods, and references: http://jsperf.com/test-reference-vs-lookup/2. The variable assignment is super fast as expected. Also, the DOM methods beat jQuery by an equally large margin. Note, that this is with Yahoo's home page as an example.
Another consideration is the size and complexity of the DOM. As this increases, the reference caching method becomes more favourable still.
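A minimal sketch of the reference-storing approach (names hypothetical): the container is saved once at creation time, so later code reads a property instead of querying the DOM.

```javascript
// Store the parent container when the element object is created.
function makeContentElement(containerNode) {
  return {
    container: containerNode, // cached once, never re-queried
    getContainer: function () {
      return this.container; // plain property read: no DOM search
    }
  };
}
```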
A local variable will be super fast compared to looking it up each time. Test to prove it.
jQuery is a function that builds and returns an object. That part isn't super expensive, but actual DOM lookups do involve a fair bit of work. Overhead isn't that high for a simple query that matches an existing DOM method like getElementById or getElementsByClassName (the latter doesn't exist in IE8, so it's really slow there), but the difference is still between some work (building an object that wraps a DOM access method) and almost no work (referencing an existing object). Always cache your selector results if you plan on reusing them.
Also, the xpath stuff that you're using can be really expensive in some browsers so yes, I would definitely cache that.
Stuff to watch out for:
A long series of jQuery selector segments without IDs
A selector with only a class in IE8 or less: add the tag name (e.g. 'div.someClass') for a drastic improvement. When you use only the class, IE8 and below has to check every piece of HTML at the interpreter level rather than using a speedy native method.
xpath-style queries (a lot of newer browsers probably handle these okay)
When writing selectors consider how much markup has to be looked at to get to it. If you know you only want divs of a certain class inside a certain ID, do one of these $('#theID div.someClass') rather than just $('div.someClass');
But regardless, just on the principle of work avoidance, cache the value if you're going to use it twice or more. And avoid haranguing the DOM with repeated requests as much as you can.
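A generic caching wrapper illustrates the work-avoidance principle (a sketch, not a jQuery API): do the expensive lookup once per key and reuse the stored result.

```javascript
// Wrap any lookup function so repeated calls with the same key
// hit an in-memory cache instead of redoing the work.
function makeCachedLookup(lookupFn) {
  var cache = {};
  return function (key) {
    if (!(key in cache)) {
      cache[key] = lookupFn(key); // expensive work only on a miss
    }
    return cache[key];
  };
}
```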
Looking up an element by ID is super fast. I am not 100% sure I understand your other approach, but I doubt it would be any better than a simple lookup of an element by its ID; browsers know how to do this task best. From what you've explained, I can't see how your approach would be any faster.
Which is faster and why? Selecting div (for plugin needs) by $('div[data-something]') or $('div.something')? I lean towards the former since it's "cleaner".
Based on this SO question I know I shouldn't be using both. However I didn't find out whether there is a difference between these.
In Chrome 16 at least, there is no difference. However, if you make the class selector less specific ($(".test") for example), it does outperform the other methods.
That was somewhat unexpected, because as ShankarSangoli mentions, I thought the div.test class selector would be faster.
It will vary by browser. Nearly all browsers now support querySelectorAll, and jQuery will use it when it can. querySelectorAll can be used with attribute presence selectors, so if it's there jQuery doesn't have to do the work, it can offload it to the engine.
For older browsers without querySelectorAll, jQuery will obviously have to do more work, but even IE8 has it.
As with most of these things, your best bet is:
Don't worry about it until/unless you see a problem, and
If you see a problem, profile it on the browsers you intend to support and then make an informed decision.
Selecting by class is generally faster than an attribute selector because jQuery tries to use the native getElementsByClassName first, if the browser supports it. If not, it uses querySelectorAll, which uses CSS selectors to find the elements within the page.
What options do I have to access the elements of a DOM tree in a For Loop ? And if it's too difficult can I convert it to an array ?
thanks,
Bruno
Here's an example on jsfiddle.
If you have any questions, don't hesitate to ask. The DOM has a magnificent traversal system, and this example doesn't even begin to tap into its raw power.
Also, be sure to check w3schools, although it's not a perfectly reliable source.
The DOM tree allows you to navigate down the levels using the .children and .childNodes properties.
.children contains only the element nodes below the current one, while .childNodes contains all child nodes, including text nodes.
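A plain for loop over those collections looks like this; `root` can be any DOM element (or, for testing, anything exposing a .children collection):

```javascript
// Collect the tag names of an element's direct children.
function tagNamesOf(root) {
  var names = [];
  for (var i = 0; i < root.children.length; i++) {
    names.push(root.children[i].tagName);
  }
  return names;
}
```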
You can also use getElementById() to get a specific node (much quicker than any array search could ever be), and getElementsByTagName() to get all elements of a particular type.
I definitely wouldn't recommend converting it to an array -- the DOM tree as it stands is much more flexible than any array.
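That said, if you do still need a real array (to use array methods, say), any array-like collection such as a NodeList can be copied with a simple loop:

```javascript
// Copy an array-like collection (anything with .length and indexed
// access) into a genuine JavaScript array.
function toArray(collection) {
  var arr = [];
  for (var i = 0; i < collection.length; i++) {
    arr.push(collection[i]);
  }
  return arr;
}
```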
If you need more flexibility, you could try jQuery, which gives you even more power for searching the DOM by adding complex CSS-style selector queries to the mix. (Modern browsers also provide this natively with the querySelectorAll() method, but it isn't available in all browsers, so you're better off using jQuery or similar for the time being.)
When using $("#xxx"), I guess that under the hood jQuery uses getElementById.
What about $(".xxx") does it scan the whole DOM every time?
jQuery attempts to use the fastest selection method to get what you asked for. There are a number of good resources with performance optimization tips out there that relate directly to jQuery:
Good ways to improve jQuery selector performance?
http://www.artzstudio.com/2009/04/jquery-performance-rules/
http://www.componenthouse.com/article-19
http://www.learningjquery.com/2006/12/quick-tip-optimizing-dom-traversal
See the context argument to the $ function. If not supplied, it defaults to the entire document.
So to answer your question:
$('whatever'); // scans the entire `document`
$('whatever', element); // scans only within element
What about $(".xxx") does it scan the whole DOM every time?
If you don't do the caching: yes. Caching is simple enough:
var $myCachedElements = $('.myElements'); // DOM querying occurs
$myCachedElements.animate({left: '1000px'}, 'slow'); // no DOM Querying this time, as long as you use the variable.
Many browsers do not support getElementsByClassName as a native DOM function, so jQuery has to do the work itself by checking each element's classes.
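Roughly, that per-element work looks like this (a sketch of the idea, not jQuery's actual implementation): inspect the className of every element on the page.

```javascript
// Manual class scan: check each element's class list for a match.
function byClassName(allElements, cls) {
  var out = [];
  for (var i = 0; i < allElements.length; i++) {
    var classes = (allElements[i].className || "").split(/\s+/);
    if (classes.indexOf(cls) !== -1) out.push(allElements[i]);
  }
  return out;
}
```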
Here's a compatibility table for document.getElementsByClassName: http://www.quirksmode.org/dom/w3c_core.html#gettingelements
The browsers in green for getElementsByClassName will not require a full DOM scan for $(".className") selectors, and will use browser-native methods instead. The ones in red will be slower.
The difference isn't as pronounced as you'd think though, even for thousands of elements.