jquery: when does $("???") scan the whole DOM? - javascript

When using $("#xxx") I guess under the hoods jQuery uses getElementById.
What about $(".xxx") does it scan the whole DOM every time?

jQuery attempts to use the fastest selection method to get what you asked for. There are a number of good resources with performance optimization tips out there that relate directly to jQuery:
Good ways to improve jQuery selector performance?
http://www.artzstudio.com/2009/04/jquery-performance-rules/
http://www.componenthouse.com/article-19
http://www.learningjquery.com/2006/12/quick-tip-optimizing-dom-traversal

See the context argument to the $ function. If not supplied, it defaults to the entire document.
So to answer your question:
$('whatever'); // scans the entire `document`
$('whatever', element); // scans only within element
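For example (container id and class name hypothetical), the context form is just shorthand for calling .find() on a wrapped element, so only that element's subtree gets scanned:
var container = document.getElementById('sidebar'); // hypothetical container element
var a = $('.item', container);        // scoped: only scans the container's subtree
var b = $(container).find('.item');   // equivalent call, returns the same elements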

What about $(".xxx") does it scan the whole DOM every time?
If you don't do the caching: yes. Caching is simple enough:
var $myCachedElements = $('.myElements'); // DOM querying occurs
$myCachedElements.animate({left: '1000px'}, 'slow'); // no DOM Querying this time, as long as you use the variable.

Many browsers do not support getElementsByClassName as a native DOM function, so jQuery has to do the work itself by checking each element's classes.

Here's a compatibility table for document.getElementsByClassName: http://www.quirksmode.org/dom/w3c_core.html#gettingelements
The browsers in green for getElementsByClassName will not require a full DOM scan for $(".className") selectors, and will use browser-native methods instead. The ones in red will be slower.
The difference isn't as pronounced as you'd think though, even for thousands of elements.
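As a rough illustration (a simplified sketch, not jQuery's actual code), the fallback in those "red" browsers amounts to scanning every element and testing its class attribute:
function getByClassName(className, root) {
  root = root || document;
  if (root.getElementsByClassName) {
    return root.getElementsByClassName(className); // native fast path (the "green" browsers)
  }
  var all = root.getElementsByTagName('*');        // fallback: walk every element
  var matches = [];
  for (var i = 0; i < all.length; i++) {
    if ((' ' + all[i].className + ' ').indexOf(' ' + className + ' ') > -1) {
      matches.push(all[i]);
    }
  }
  return matches;
}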

Related

Is jQuery traversal preferred over selectors?

Is using $("#vacations").find("li").last() is a better practice than $("#vacations li:last")?
Background and my thoughts:
I was playing with a nice interactive try jQuery tutorial and one of the tasks says:
As you are looking through your code, you notice that someone else is selecting the last vacation with: $("#vacations li:last"). You look at this and you think, "Traversal would make this way faster!" You should act on those thoughts, refactor this code to find the last li within #vacations using traversal instead.
Why would I think so? To me, using selectors looks a bit higher level than traversal. In my mind, when I specify a selector, it is up to jQuery to decide how best to get the single result I need (without needing to return interim results).
What is the extra overhead of using composite selectors? Is it because the current implementation of the selector logic just parses the string and uses the traversal API? Is parsing a string that slow? Is there a chance that a future implementation will use the fact that it does not need to return interim results and will be faster than traversal?
There's no cut-and-dried answer to this, but with respect to the :last selector you're using, it's a proprietary extension to the Selectors API standard. Because of this, it isn't valid to use with the native .querySelectorAll method.
What Sizzle does is basically try to use your selector with .querySelectorAll, and if it throws an Exception due to an invalid selector, it'll default to a purely JavaScript based DOM selection/filtering.
This means including selectors like :last will cause you to not get the speed boost of DOM selection with native code.
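In rough pseudo-jQuery terms (a sketch of the idea, not Sizzle's actual code), the selection step looks something like this:
function select(selector, context) {
  context = context || document;
  try {
    return context.querySelectorAll(selector);       // fast native path for valid CSS selectors
  } catch (e) {
    // :last and friends are not valid CSS, so this throws and we fall back
    return sizzleFilterFallback(selector, context);   // hypothetical JavaScript-based fallback
  }
}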
Furthermore, there are optimizations included so that when your selector is very simple, like just an ID or an element name, the native getElementById and getElementsByTagName will be used, which are extremely fast; usually even faster than querySelectorAll.
And since the .last() method just grabs the last item in the collection instead of filtering all the items, which is what Sizzle filters normally do (at least they used to), that also will give a boost.
IMO, keep away from the proprietary stuff. Now that .querySelectorAll is pretty much ubiquitous, there are real advantages to only using standards-compliant selectors. Do any further filtering post DOM selection.
In the case of $("#vacations").find("li"), don't worry about the interim results. This will use getElementById followed by getElementsByTagName, and will be extremely fast.
If you're really super concerned about speed, reduce your usage of jQuery, and use the DOM directly.
You'll currently find notes in the docs for selectors like :last, that warn you about the performance loss:
Because :last is a jQuery extension and not part of the CSS specification, queries using :last cannot take advantage of the performance boost provided by the native DOM querySelectorAll() method. To achieve the best performance when using :last to select elements, first select the elements using a pure CSS selector, then use .filter(":last").
But I'd disagree that .filter(":last") would be a good substitute. Much better would be methods like .last() that will target the element directly instead of filtering the set. I have a feeling that they just want people to keep using their non-standards-compliant selectors. IMO, you're better off just forgetting about them.
Here's a test for your setup: http://jsperf.com/andrey-s-jquery-traversal
Sizzle, jQuery's selector engine, parses the string with regex and tries to speed up very basic selectors by using getElementById and getElementsByTagName. If your selector is anything more complicated than #foo and img, it'll try to use querySelectorAll, which accepts only valid CSS selectors (no :radio, :eq, :checkbox or other jQuery-specific pseudo-selectors).
The selector string is both less readable and slower, so there's really no reason to use it.
By breaking the selector string up into simple chunks that Sizzle can parse quickly (#id and tagname), you're basically just chaining together calls to getElementById and getElementsByTagName, which is about as fast as you can get.
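Concretely, assuming the #vacations markup from the tutorial, the traversal version boils down to roughly this:
var vacations = document.getElementById('vacations');   // native, extremely fast
var items = vacations.getElementsByTagName('li');        // native, extremely fast
var last = items[items.length - 1];                      // direct index, no filtering pass
// whereas $("#vacations li:last") can't go through querySelectorAll at all, because :last isn't valid CSS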

Which is more efficient - $('selector').last() or $('selector:last')?

I have a parent element with a real lot of child elements (1000s). I am looking for the fastest possible way to get a handle to the last child element. The options I've found are:
$('.parent .child').last()
and
$('.parent .child:last')
Any opinions on which one is reliably faster across browsers?
EDIT
I wrote a test in jsfiddle to measure this, and it turns out the difference is pretty much negligible. Though .last() performed better, the margin was tiny. So I think even with the :last selector, it is actually getting the whole list of elements and then returning the last element? Unbelievable.
Fiddle: http://jsfiddle.net/techfoobar/GFb9f/8/
Many modern browsers support document.querySelectorAll(), so $('.parent .child').last() should be faster, as the selector string can be passed as is, and then the last matched item popped off.
In the latter case, :last is not a standard pseudo-selector, and Sizzle has to start chunking the selector string to start matching.
Overall though, I would use what you believe is the most readable. To begin optimising this, first ensure that your application has performance issues and you have identified this selector as the bottleneck.
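If you want to sanity-check it yourself outside jsfiddle, a crude console.time loop (class names taken from the question) shows the same trend:
console.time('.last() method');
for (var i = 0; i < 1000; i++) { $('.parent .child').last(); }
console.timeEnd('.last() method');
console.time(':last selector');
for (var j = 0; j < 1000; j++) { $('.parent .child:last'); }
console.timeEnd(':last selector');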
You have to see this performance test!
UPDATE: There are already good answers on this related question.

performance issue : storing a reference to DOM element vs using selectors

So in my app, the user can create some content inside certain div tags, and each piece of content, or as I call them "elements", has its own object. Currently I use a function to calculate the original div tag that the element has been placed inside, using jQuery selectors, but I was wondering, in terms of performance, wouldn't it be better to just store a reference to the div tag once the element has been created, instead of calculating it later?
So right now I use something like this:
$('.div[value='+divID+']')
but instead I can just store the reference inside the element when I'm creating the element. Would that be better for performance?
If you have lots of these bindings it would be a good idea to store references to them. As mentioned in the comments, variable lookups are much much faster than looking things up in the DOM - especially with your current approach. jQuery selectors are slower than the pure DOM alternatives, and that particular selector will be very slow.
Here is a test based on the one by epascarello showing the difference between jQuery, DOM2 methods, and references: http://jsperf.com/test-reference-vs-lookup/2. The variable assignment is super fast as expected. Also, the DOM methods beat jQuery by an equally large margin. Note, that this is with Yahoo's home page as an example.
Another consideration is the size and complexity of the DOM. As this increases, the reference caching method becomes more favourable still.
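A sketch of the reference-storing approach (the object shape is hypothetical): query once when the element is created, and keep the result on the object so no selector needs to run again later:
function createContentElement(divID) {
  var $container = $('.div[value="' + divID + '"]'); // query the DOM once, at creation time
  return {
    id: divID,
    $container: $container                           // stored reference, reused later
  };
}
// Later on, no DOM lookup is needed:
// myElement.$container.append(newContent);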
A local variable will be super fast compared to looking it up each time. Test to prove it.
jQuery is a function that builds and returns an object. That part isn't super expensive, but actual DOM lookups do involve a fair bit of work. Overhead isn't that high for a simple query that matches an existing DOM method like getElementById or getElementsByClassName (which doesn't exist in IE8, so it's really slow there), but yes, the difference is between work (building an object that wraps a DOM access method) and almost no work (referencing an existing object). Always cache your selector results if you plan on reusing them.
Also, the xpath stuff that you're using can be really expensive in some browsers so yes, I would definitely cache that.
Stuff to watch out for:
Long series of JQ params without IDs
Selectors with only a class in IE8 or less; add the tag name (e.g. 'div.someClass') for a drastic improvement. IE8 and below have to hit every piece of HTML at the interpreter level rather than using a speedy native method when you only use the class
xpath-style queries (a lot of newer browsers probably handle these okay)
When writing selectors, consider how much markup has to be looked at to get to the target. If you know you only want divs of a certain class inside a certain ID, use $('#theID div.someClass') rather than just $('div.someClass').
But regardless, just on the principle of work avoidance, cache the value if you're going to use it twice or more. And avoid haranguing the DOM with repeated requests as much as you can.
Looking up an element by ID is super fast. I am not 100% sure I understand your other approach, but I doubt it would be any better than a simple lookup of an element by its ID; browsers know how to do this task best. From what you've explained, I can't see how your approach would be any faster.

Explore tree structure in javascript in a For Loop

What options do I have to access the elements of a DOM tree in a for loop? And if it's too difficult, can I convert it to an array?
thanks,
Bruno
Here's an example on jsfiddle.
If you have any questions, don't hesitate to ask. XML has a magnificent traversal system; this doesn't even begin to cut into the raw power of the DOM.
Also, be sure to check w3schools, although it's not a perfectly reliable source.
The DOM tree allows you to navigate down the levels using the children or childNodes properties.
children provides a collection of the DOM elements below the current one, and childNodes provides all child nodes, including text nodes.
You can also use getElementById() to get a specific node (much quicker than any array search could ever be), and getElementsByTagName() to get all elements of a particular type.
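For example, a plain for loop over an element's children (the id here is hypothetical):
var parent = document.getElementById('tree');            // hypothetical parent node
for (var i = 0; i < parent.children.length; i++) {
  console.log(parent.children[i].tagName);               // element children only
}
for (var j = 0; j < parent.childNodes.length; j++) {
  console.log(parent.childNodes[j].nodeType);            // includes text and comment nodes
}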
I definitely wouldn't recommend converting it to an array -- the DOM tree as it stands is much more flexible than any array.
If you need more flexibility, you could try jQuery, which gives you even more flexibility for searching the DOM by adding complex CSS-style selector queries to the mix. (Modern browsers also provide this natively with the querySelectorAll() method, but it isn't available in all browsers, so you're better off using jQuery or similar for this for the time being.)
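For instance (the selector is hypothetical), the same query in jQuery and with the native method:
var $links = $('#tree li.active > a');                          // jQuery
var links = document.querySelectorAll('#tree li.active > a');   // native, where supported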

jQuery Clone Performance

I've read that JavaScript gets significant performance benefits from modifying off-DOM. Earlier today, I was reading the clone documentation:
"Note that when using the .clone()
method, we can modify the cloned
elements or their contents before
(re-)inserting them into the
document."
Is the implication then that if I have 1,000 LI's and I want to make a change across all of them, the most efficient method would be to clone it, modify the clone, destroy the original, and place the clone?
How would you go about making this modification in the most efficient way?
Actually, the implication is that it would be more efficient to modify cloned elements before inserting them into the DOM than to insert the cloned elements into the document and then modify them. Whether or not clone-modify-replace is more efficient than simply modifying the elements in-place will likely depend a lot on what modifications you intend to make... As always, profile your code and then choose the option that best meets your needs based on real data.
...And while you're at it... You can "detach" a DOM element directly: just call removeChild() (or, since you're using jQuery, detach()) - the element will still exist as long as you retain a reference to it, and can be re-inserted after you're done making modifications.
...Oh, and regardless of which technique you end up using, you'll almost certainly see better results from removing the parent UL than from removing each of the 1K child LI elements, one at a time...
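A sketch of the detach-modify-reinsert approach for the 1,000 LI case (the id and class names are hypothetical):
var $list = $('#bigList');                              // hypothetical UL holding ~1000 LIs
var $placeholder = $('<div/>').insertBefore($list);     // mark the original position
var $detached = $list.detach();                         // out of the document, data/events kept
$detached.find('li').addClass('updated');               // modify off-DOM, no reflow per change
$placeholder.replaceWith($detached);                    // put the list back where it was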
The detach() method is the method designed for exactly what you're trying to do:
http://api.jquery.com/detach/
Edit: It's worth mentioning that everyone should do their own profiling/tests, which is good, common-sense advice. Plus it's fun to see the ridiculous performance gains you'll get. :)
My rule of thumb is this:
If you're doing manipulations on many elements that involve adding, removing, or moving elements around, you should absolutely use .detach(). If you're doing something like addClass, don't use detach.
If you're unsure about specific manipulations or how many qualifies as 'many', you should run a test.
Here's a simple comparison I made in this question's debate:
http://jsbin.com/uwode3/5 vs http://jsbin.com/uwode3/4
There's one more efficient method: .detach() them from the tree, modify, and then reinsert.
However, if you only modify properties of the DOM objects, and don't read them, you shouldn't trigger a reflow (which is the slow operation) anyway, at least not in Firefox (from what I've read). Detaching them and then reattaching does make sure that there are at most two reflows.
