The things I'm trying to get my website to do are getting fairly complex, and some of them take long enough to cause visible delays.
Mainly, adding content to the document slows things down, as do the jQuery animations I'm using.
I create elements using document.createElement("div") rather than just doing document.write("<div id='someId'></div>"), and the animations are custom animations using .animate().
I'm wondering how I can continue to add content as I am but prevent the browser from freezing every time I want to add something.
Any suggestions for speeding things up so there is less of a delay?
Tips on what to avoid in JavaScript programming that causes increased delay would be really helpful.
This sounds more like you want to improve DOM interaction performance rather than JavaScript itself, so in that vein:
Yes, document.write is bad: it blocks additional loading. Any JS executing before your page has finished loading basically requires all other processing to stop. (Modern browsers like Safari (and by proxy Chrome) and Firefox do a degree of content preloading in order to prevent loads from blocking, but subsequent style resolution, etc. is still largely blocked.)
document.createElement is in general the best solution, although there are certain cases where just manipulating innerHTML on an element may be faster -- but its behaviour isn't consistent across browsers and the performance characteristics are tricky.
Reduce the amount of content you load initially -- e.g. if you have lots of content (or content that requires large scripts), try to delay loading it until after the initial page load has completed.
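For example, a minimal sketch of deferring a heavy script until after the initial load (the file name here is made up):

window.addEventListener('load', function () {
    var script = document.createElement('script');
    script.src = 'heavy-widget.js';    // hypothetical heavy script
    document.body.appendChild(script); // fetched and executed only after first render
});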
Oh, and for animations you should look at CSS animations, as they perform much better than any JS implementation, but they're only present in Safari (and by proxy Chrome) and Firefox 3.5 -- definitely not in IE :-(
In terms of JavaScript performance, avoid with and getters/setters like the plague and you should be fine in most modern JS implementations.
I know this is a couple of years late, but better late than never, considering this question popped up at the top of Google's results for javascript animation optimization.
If you need to add a large number of elements to the DOM and don't want to use innerHTML to do so, I believe you can speed that up by adding the elements to a DocumentFragment first and then injecting the DocumentFragment into the DOM all at once. I remember reading that injection into the live DOM is the slowest part of using the standard API to create and add elements. A DocumentFragment lets you inject all of its contents in one go, eliminating a large amount of the overhead caused by calling appendChild or other insertion methods multiple times; those methods supposedly don't carry the same costs when inserting into a DocumentFragment as opposed to inserting into the DOM.

I wish I could find the article I read this in, as it had some really nice benchmark tests and explanations. I'm sure you can find more information by doing some searches.
As always, though, which method to use should be determined by testing on a case-by-case basis. Eventually you'd probably learn which one to use based on the structure of the elements, their number, and how they're used.
PS. There may be an issue with styles and DocumentFragments, but I can't remember what was said about them; either way it'd be something worth checking out.
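If you want to measure the difference yourself, here's a rough sketch of the kind of benchmark I mean (the container ids, element count, and use of performance.now() are all my own choices):

// Append 5000 divs directly vs. via a DocumentFragment, and time both.
function timeAppend(target, useFragment) {
    var start = performance.now();
    var parent = useFragment ? document.createDocumentFragment() : target;
    for (var i = 0; i < 5000; i++) {
        parent.appendChild(document.createElement('div'));
    }
    if (useFragment) {
        target.appendChild(parent); // a single injection into the live DOM
    }
    return performance.now() - start;
}

console.log('direct:', timeAppend(document.getElementById('out1'), false));
console.log('fragment:', timeAppend(document.getElementById('out2'), true));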
JQuery is great for manipulating the DOM. I'd take a look at their code in order to get some ideas. Or, seeing how I'm way lazy, I'd just use their stuff.
In one of the podcasts Jeff Atwood mentioned you'd have to be crazy to write your own javascript... after using JQuery, I'd have to agree with him.
Although I am a huge fan of jQuery, and it makes development much easier, note that a jQuery statement (or in any JavaScript library) will never be as fast as its semantic equivalent in plain JavaScript because of the extra overhead.
That being said, these are my go-to references whenever I want to speed up my JavaScript.
http://home.earthlink.net/~kendrasg/info/js_opt/
http://www.miislita.com/searchito/javascript-optimization.html
There is the document fragment API, which can make life much easier. Here is an example use:
function appendDivs(element) {
    var fragment = document.createDocumentFragment();
    for (var i = 0; i < 10; i++) {
        var div = document.createElement("div");
        fragment.appendChild(div); // cheap: the fragment is not in the live DOM
    }
    element.appendChild(fragment); // one injection instead of ten
}
For a complete reference, you can see this post on my blog.
What surprises most people is that using innerHTML is, actually, the fastest way there is to manipulate the DOM.
You can see one benchmark here for example.
// Some helper functions for easier DOM coding
function createEl(el) {
    return document.createElement(el);
}
function addElBody(el) {
    return document.body.appendChild(el);
}
function addElChild(el, child) { // el is the parent
    return el.appendChild(child);
}

// Example:
var container = createEl('div');
container.setAttribute('id', 'container');
addElBody(container);

var item = createEl('ul');
item.setAttribute('id', 'list');
addElChild(container, item); // append the list to the container, not to <body>
Related
I was wondering when to use DOM-based generation versus .innerHTML or appending strings using jQuery's .append method. I read a related post here (Should you add HTML to the DOM using innerHTML or by creating new elements one by one?) but I'm still unsure of the use case for each method. Is it just a matter of performance, where I would always choose one over the other?
Let's say that form is an arbitrary variable:
DOM generation
var div = document.createElement("div"),
    label = document.createElement("label"),
    input = document.createElement("input");

div.appendChild(label);
div.appendChild(input);
form.appendChild(div);
JQuery
$(form).append("<div><label></label><input></div>")
The second one is more readable, although that comes from jQuery which does the innerHTML work for you. In vanilla JS, it would be like this:
form.insertAdjacentHTML("beforeend", "<div><label></label><input></div>");
...which I think beats even jQuery. That said, you should not worry too much about performance. It always depends on the number of nodes to insert: for single elements, the HTML parser is slower than creating them directly; for large HTML strings, the native parser is faster than script. If you really do worry about performance, you will need to test, test, test (and I'd say there is something wrong with your app).
Yet, there is a great difference between the two methods: with #1, you have three variables holding references to the DOM elements. If you would, for example, like to add an event listener to the input, you can do it immediately and don't need to call querySelector on form, which would be much slower. Of course, when inserting really many elements with innerHTML, you wouldn't need to do that at all, because you would use delegated events for a real performance boost, as sketched below.
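To illustrate delegated events (a hedged sketch, assuming form contains many inputs): one listener on the form handles events from every current and future input inside it, instead of one listener per element.

form.addEventListener('input', function (e) {
    if (e.target.tagName === 'INPUT') {
        console.log('value changed:', e.target.value); // e.target is the actual input
    }
});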
Note that you can also shorten method #1 with jQuery to a oneliner:
var div, label, input;
$(form).append(div=$("<div/>").append(label=$("<label/>"),input=$("<input/>")));
My conclusion:
For creating only a few elements, the DOM approach is cleaner.
Mostly, HTML strings are more readable.
Neither of the two is faster in standard situations - benchmark results vary widely.
Personally, I don't like (direct) innerHTML for a few reasons, which are outlined well in these two answers and here as well. Also, IE has a bug on tables (see Can't set innerHTML on tbody in IE)
Generally speaking, hitting the DOM repeatedly is much slower than, say, swapping out a big block of HTML with innerHTML. I believe there are two reasons for this. One is reflow: the browser has to recalculate for potential layout impact across a potentially wide variety of elements. The other, I believe (and somebody correct me if I'm wrong), is that there's a bit of overhead involved in translating what goes on in the browser's internal rendering and layout machinery into objects you can use in JavaScript. Since the DOM is often under constantly changing conditions, you have to run through that process every time, with few opportunities to cache results of any kind -- possibly to a degree even if you're just creating new elements without appending them (since things like CSS rules and what 'mode' the browser is in due to the doctype have to be taken into account).
DOM methods allow you to construct document fragments and create and append HTML elements to those without affecting the actual document layout, which helps you avoid unnecessary reflow.
But here's where it gets weird.
Inserting new HTML into an empty node: close to a tie, or something innerHTML is typically much faster at in a lot of (mostly older) browsers.
Replacing a ton of existing HTML content: this is actually something where DOM methods tend to win out when the performance isn't too close to call.
Basically, innerHTML, if it stinks, tends to stink at the teardown process where large swaps are happening. DOM methods are better at teardown but tend to be slower at creating new HTML and injecting it directly without replacing anything, when there's any significant difference at all.
There are actually hybrid methods out there that can do pretty marvelous things for performance when you have the need. I used one over a year ago and was pretty impressed by response time improvement for swapping large swathes of HTML content for a lazy-loading grid vs. just using innerHTML alone. I wish I could find a link to the guy who deserves credit for figuring this out and spelling it out on the web (author, has written a lot of RegEx stuff too - couldn't google for the life of me).
As a matter of style vs. perf, I think you should avoid tweaking the actual DOM node structure repeatedly, but whether you construct HTML in a document fragment beforehand or use innerHTML is pretty much a matter of judgement. I personally like innerHTML for the most part because JS has a lot of powerful string methods that can rapidly convert data to HTML-ready strings. For instance:
var htmlStr = '<ul><li>' + arrayOfNames.join('</li><li>') + '</li></ul>';
That one-liner is a UL I can assign directly to innerHTML. It's almost as easy to build complete tables with the right data structure and a simple while loop. Now go build the same UL with as many LIs as the length of the arrayOfNames with the DOM API. I really can't think of a lot of good reasons to do that to yourself. innerHTML became de facto standard for a reason before it was finally adopted into the HTML 5 spec. It might not fit the node-based htmlElement object tweaking approach of the DOM API but it's powerful and helps you keep code concise and legible. What I would not likely do, however, is use innerHTML to edit and replace existing content. It's much safer to work from data, build, and swap in new HTML than it is to refer to old HTML and then start parsing innerHTML strings for attributes, etc when you have DOM manipulation methods convenient and ready for that.
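For contrast, here is a sketch of the same UL built with the DOM API (same arrayOfNames; note it's considerably more code, though createTextNode does protect you from HTML injection):

var ul = document.createElement('ul');
for (var i = 0; i < arrayOfNames.length; i++) {
    var li = document.createElement('li');
    li.appendChild(document.createTextNode(arrayOfNames[i]));
    ul.appendChild(li);
}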
Your chief performance concern should probably be to avoid hammering away at the 'live' portions of the DOM, but the rest I would leave up to people as a matter of style and testing where HTML generation is concerned. innerHTML is and has for years now been in the HTML5 working draft, however, and it is pretty consistent across modern browsers. It's also been de facto spec for years before that and was perfectly viable as an option since before Chrome was new, IMO but that's a debate that's mostly done at this point.
It is just a matter of performance. Choose the one that fits you best.
jsPerf is full of those performance tests, like this one: test
Disclaimer:
I've blathered on kind-of excessively here in an attempt to provide enough context to pre-empt all questions you folks might have of me. Don't be scared by the length of this question: much of what I've written is very skim-able (especially the potential solutions I've come up with).
Goal:
The effect I'm hoping to achieve is displaying the same element (and all descendants) in multiple places on the same page. My current solution (see below for more detail) involves having to clone/copy and then append in all the other places I want it to appear in the DOM. What I'm asking for here is a better (more efficient) solution. I have a few ideas for potentially more efficient solutions (see below). Please judge/criticize/dismiss/augment those, or add your own more-brilliant-er solution!
"Why?" you ask?
Well, the element (and its descendants) that I'm wanting to display more than once potentially has lots of attributes and contents - so cloning it and appending it someplace else (sometimes more than one other place) can get to be quite a resource-hogging DOM manipulation operation.
Some context:
I can't describe the situation exactly (damn NDA's!) but essentially what I've got is a WYSIWYG html document editor. When a person is editing the DOM, I'm actually saving the "original" node and the "changed" node by wrapping them both in a div, hiding the "original" and letting the user modify the new ("changed") node to their heart's content. This way, the user can easily review the changes they've made before saving them.
Before, I'd just been letting the user navigate through the "diff divs" and temporarily unhiding the "original" node, to show the changes "inline". What I'm trying to do now is let the user see the whole "original" document, and their edited ("changed") document in a side-by-side view. And, potentially, I'd like to save the changes through multiple edit sessions, and show 'N' number of versions side-by-side simultaneously.
Current Solution:
My current solution to achieve this effect is the following:
Wrap the whole dang dom (well, except the "toolbars" and stuff that they aren't actually editing) in a div (that I'll call "pane1"), and create a new div (that I'll call "pane2"). Then deep-clone pane1's contents into pane2, and in pane1 only show the "original" nodes, and in pane2 only show the "changed" nodes (in the diff regions - everything outside of that would be displayed/hidden by a toggle switch in a toolbar). Then, repeat this for panes 3-through-N.
Problem with Current Solution:
If the document the user is editing gets super long, or contains pictures/videos (with different src attributes) or contains lots of fancy styling things (columns, tables and the like) then the DOM can potentially get very large/complex, and trying to clone and manipulate it can make the browser slow to a crawl or die (depending on the DOM's size/complexity and how many clones need to be made as well as the efficiency of the browser/the machine it's running on). If size is the issue I can certainly do things like actually remove the hidden nodes from the DOM, but that's yet more DOM manipulation operations hogging resources.
Potential Solutions:
1. Find a way to make the DOM more simple/lightweight
so that the cloning/manipulating that I'm currently doing is more efficient. (of course, I'm trying to do this as much as I can anyway, but perhaps it's all I can really do).
2. Create static representations of the versions with Canvas elements or something.
I've heard there's a trick where you can wrap HTML in an SVG element, then use that as an image source and draw it onto a canvas. I'd think that those static canvasses (canvi?) would have a much smaller memory footprint than cloned DOM nodes. And manipulating the DOM (hiding/showing the appropriate nodes), then drawing an image (rinse & repeat) should be quicker & more efficient than cloning a node and manipulating the clones. (maybe I'm wrong about that? Please tell me!)
I've tried this in a limited capacity, but wrapping my HTML in SVG messes with the way it's rendered in a couple of weird cases - perhaps I just need to massage the elements a bit to get them to display properly.
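For reference, the trick usually looks something like this (a hedged sketch; the dimensions and markup are illustrative, and external stylesheets won't apply inside the foreignObject, so styles have to be inlined):

// Wrap HTML in an SVG <foreignObject>, load it as an image, draw it to a canvas.
var html = '<div xmlns="http://www.w3.org/1999/xhtml">Hello, snapshot!</div>';
var svg = '<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">' +
          '<foreignObject width="100%" height="100%">' + html + '</foreignObject>' +
          '</svg>';
var img = new Image();
img.onload = function () {
    var canvas = document.getElementById('snapshot'); // hypothetical canvas element
    canvas.getContext('2d').drawImage(img, 0, 0);
};
img.src = 'data:image/svg+xml;charset=utf-8,' + encodeURIComponent(svg);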
3. Find some magic element
that just refers to another node and looks/acts like it without being a real clone (and therefore being somehow magically much more lightweight). Even if this meant that I couldn't manipulate this magic element separately from the node it's "referencing" (or its fake children) - in that case I could still use this for the unchanged parts, and hopefully shave off some memory usage/DOM Manipulation operations.
4. Perform some of the steps on the server side.
I do have the ability to execute server side code, so maybe it's a lot more efficient (some of my users might be on mobile or old devices) to get all ajax-y and send the relevant part of the DOM (could be the "root" of the document the user is editing, or just particularly heavy "diff divs") to the server to be cloned/manipulated, then request the server-manipulated "clones" and stick 'em in their appropriate panes/places.
5. Fake it to make it "feel" more efficient
Rather than doing these operations all in one go and making the browser wait till the operations are done to re-draw the UI, I could do the operations in "chunks" and let the browser re-render and catch a breather before doing the next chunk. This probably actually would result in more time spent, but to the casual user it might "feel" quicker (haha, silly fools...). In the end, I suppose, it is user experience that is what's most important.
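Something like this chunking helper is what I have in mind (a sketch; the names and chunk size are arbitrary):

function processInChunks(items, processItem, chunkSize) {
    var i = 0;
    (function nextChunk() {
        var end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) {
            processItem(items[i]);
        }
        if (i < items.length) {
            setTimeout(nextChunk, 0); // yield so the browser can re-render
        }
    })();
}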
Footnote:
Again, I'm NDA'd which prevents me from posting the actual code here, as much as I'd like to. I think I've thoroughly explained the situation (perhaps too thoroughly - if such a thing exists) so it shouldn't be necessary for you to see code to give me a general answer. If need be, I suppose I could write up some example code that differs enough from my company's IP and post it here. Let me know if you'd like me to do that, and I'll be happy to oblige (well, not really, but I'll do it anyway).
Take a look at CSS background elements (the CSS element() function, -moz-element() in Firefox). They allow you to display DOM nodes elsewhere/repeatedly. They are of course read-only, but they should update live.
You may still have to come up with a lot of magic around them, but it is a similar solution to:
Create static representations of the versions with Canvas elements or something.
CSS background elements are also very experimental, so you may not get very far with them if you have to support a range of browsers.
To be honest, after reading the question I almost left thinking it belongs in the "too-hard-basket", but after some thought perhaps I have some ideas.
This is a really difficult problem and the more I think about it the more I realise that there is no real way to escape needing to clone. You're right in that you can create an SVG or Canvas but it won't look the same, though with a fair amount of effort I'm sure you can get quite close but not sure how efficient it will be. You could render the HTML server-side, take a snapshot and send the image to the client but that's definitely not scalable.
The only suggestions I can think of are as follows, sorry if they are long-winded:
How are you doing this clone? If you're going through each element and as you go through each you are creating a clone and copying the attributes one by one then this is heaavvvyy. I would strongly suggest using jQuery clone as my guess is that it's more efficient than your solution. Also, when you are making structural changes it might be useful to take advantage of jQuery's detach/remove (native JS: removeChild()) methods as this will take the element out of the DOM so you can alter it before reinserting.
I'm not sure how you've got your WYSIWYG, but avoid using inputs as they are heavy. If you must, then I'm assuming they don't look like inputs, so just swap them out with another element and style it (CSS) to match. Make sure you do these swaps before you reinsert the clone into the DOM.
Don't literally include video at the time of showing the user comparisons. The last thing we want to do is inject 3rd-party objects into the page. Use an image; you only have to do it while comparing. Once again, do the swap before inserting the clone into the DOM.
I'm assuming the cloned elements won't have JavaScript attached to them (if there is, remove it; fewer moving parts means more efficiency). However, the "changed" elements will probably have some JS events attached, so perhaps remove them for the period of comparison.
Use Chrome/FF repaint/reflow tools to see how your page is working when you restructure the DOM. This is important because you could be doing some "awesome" animations that are costing you intense resources. See http://paulirish.com/2011/viewing-chromes-paint-cycle/
Use CSS over inline styling where possible as modern browsers are optimised to handle CSS documents
Can you make it so your users use a fast modern browser like Chrome? If it's internal then might be worth it.
Can you do these things in Silverlight or Adobe Air? These objects get special resource privileges, so this will most likely solve your problem (according to what I'm imagining the depth of the problem is)
This one is a bit left-field but could you open in another window? Modern browsers like Chrome will run the other window in its own process which may help.
No doubt you've probably looked in to these things more than I but good luck with it. Would be curious how you solved it.
You may also try: http://html2canvas.hertzen.com/
If it works for you, canvas has much better support.
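If you go that route, the usual invocation is a one-liner (a sketch; recent versions of html2canvas are promise-based, so check the docs for the version you end up with):

html2canvas(document.querySelector('#pane1')).then(function (canvas) {
    document.body.appendChild(canvas); // a static snapshot of the pane
});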
In order to get your "side-by-side" original doc and modified doc.. rather than cloning all pane1 into pane2.. could you just load the original document in an iframe next to the content you are editing? A lot less bulky?
You could tweak how the document is displayed when it's in an iframe (e.g. hide stuff outside editable content).
And maybe when you 'save' a change, write changes to the file (or a temp) and open it up in a new iframe? That might accomplish your "multiple edit sessions"... having multiple iframes displaying the document in various states.
Just thinking out loud...
(Sorry in advance if I'm missing/misunderstanding any of your goals/requirements)
I don't know if it's already the case for you, but you should consider using the jQuery library, as it allows you to perform different kinds of DOM element manipulation, such as creating content and inserting it into several elements at once, or selecting an element on the page and inserting it into another.
Have a look on .appendTo(), .html(), .text(), .addClass(), .css(), .attr(), .clone()
http://api.jquery.com/category/manipulation/
Sorry if I'm just pointing out something you already know or even work with but your NDA is in the way of a more accurate answer.
I'm trying to improve a website's rendering speed.
Both CSS and JS files mostly reference elements like this:
Javascript:
$('.some_element').doSth()
CSS:
.some_element { /* do something */ }
Just curious - is this the optimal way of referencing elements in terms of javascript parsing and website rendering? Wouldn't it be better to do something like div.some_element?
Thanks for any info!
If speed is a priority you might want to switch to vanilla javascript as much as you can. Native javascript is faster than jQuery.
If you want to keep your jQuery selector, use a parent context to make the search for the element more efficient, e.g. $('#parent').find('.child').
You can find more tips on JavaScript and jQuery optimization on the web:
http://engineeredweb.com/blog/10/12/3-tips-make-your-jquery-selectors-faster/
div.some_element will be different from .some_element if you don't just have divs that use the some_element class.
Maybe compare render times using Chrome's built-in developer tools (or an alternative) to see if it helps you, but I doubt it'll be significant.
The fastest way to find a single element is usually with an id, not a class:
document.getElementById("whatever")
If you have to use jQuery (which is not as fast as plain javascript), then you would use:
$("#whatever")
If speed is really important, you can resolve the DOM element once when the page loads and just save the direct DOM reference so you don't have to find it when your code actually executes later.
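For instance (a sketch; the id and function names are made up):

var statusEl; // resolved once at load time
window.onload = function () {
    statusEl = document.getElementById("whatever");
};
function updateStatus(msg) {
    statusEl.innerHTML = msg; // no DOM lookup needed here
}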
As with all questions of performance, the only real way to answer a performance question is to benchmark a couple of implementation options and actually test which is faster.
I have noticed while monitoring/attempting to answer common jQuery questions, that there are certain practices using javascript, instead of jQuery, that actually enable you to write less and do ... well the same amount. And may also yield performance benefits.
A specific example
$(this) vs this
Inside a click event, referencing the clicked object's id
jQuery
$(this).attr("id");
Javascript
this.id;
Are there any other common practices like this? Where certain Javascript operations could be accomplished easier, without bringing jQuery into the mix. Or is this a rare case? (of a jQuery "shortcut" actually requiring more code)
EDIT: While I appreciate the answers regarding jQuery vs. plain JavaScript performance, I am actually looking for much more quantitative answers: while using jQuery, instances where one would actually be better off (readability/compactness) using plain JavaScript instead of $(). In addition to the example I gave in my original question.
this.id (as you know)
this.value (on most input types; the only issues I know of are in IE when a <select> doesn't have value attributes set on its <option> elements, or radio inputs in Safari.)
this.className to get or set an entire "class" property
this.selectedIndex against a <select> to get the selected index
this.options against a <select> to get a list of <option> elements
this.text against an <option> to get its text content
this.rows against a <table> to get a collection of <tr> elements
this.cells against a <tr> to get its cells (td & th)
this.parentNode to get a direct parent
this.checked to get the checked state of a checkbox Thanks #Tim Down
this.selected to get the selected state of an option Thanks #Tim Down
this.disabled to get the disabled state of an input Thanks #Tim Down
this.readOnly to get the readOnly state of an input Thanks #Tim Down
this.href against an <a> element to get its href
this.hostname against an <a> element to get the domain of its href
this.pathname against an <a> element to get the path of its href
this.search against an <a> element to get the querystring of its href
this.src against an element where it is valid to have a src
...I think you get the idea.
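To make it concrete, here's a sketch of a few of these in a plain handler (assumes a <select id="color"> exists; no jQuery wrapper needed):

document.getElementById('color').onchange = function () {
    var index = this.selectedIndex;        // position of the chosen <option>
    var label = this.options[index].text;  // its visible text
    console.log(index, label, this.value); // and its value
};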
There will be times when performance is crucial. Like if you're performing something in a loop many times over, you may want to ditch jQuery.
In general you can replace:
$(el).attr('someName');
with:
el.getAttribute('someName');
(Edit: the above was poorly worded. getAttribute is not a drop-in replacement for .attr(); rather, it retrieves the value of an attribute as it was sent from the server, and the corresponding setAttribute will set it. Necessary in some cases. See this answer for a better treatment.)
...in order to access an attribute directly. Note that attributes are not the same as properties (though they mirror each other sometimes). Of course there's setAttribute too.
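A quick illustration of that attribute/property difference (a sketch; assumes <input id="name" value="initial"> in the page):

var input = document.getElementById('name');
input.value = 'changed';                     // the property tracks the current state
input.getAttribute('value');                 // still "initial" -- what the server sent
input.setAttribute('value', 'new default');  // updates the attribute, not the displayed value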
Say you had a situation where you received a page on which you need to unwrap all tags of a certain type. It is short and easy with jQuery:
$('span').unwrap(); // unwrap all span elements
But if there are many, you may want to do a little native DOM API:
var spans = document.getElementsByTagName('span'); // a live HTMLCollection
while (spans[0]) {
    var parent = spans[0].parentNode;
    // move each child of the span up in front of the span itself
    while (spans[0].firstChild) {
        parent.insertBefore(spans[0].firstChild, spans[0]);
    }
    // then remove the now-empty span; the live list shrinks automatically
    parent.removeChild(spans[0]);
}
This code is pretty short, it performs better than the jQuery version, and can easily be made into a reusable function in your personal library.
It may seem like I have an infinite loop with the outer while because of while(spans[0]), but because we're dealing with a "live list" it gets updated when we do the parent.removeChild(spans[0]). This is a pretty nifty feature that we miss out on when working with an Array (or Array-like object) instead.
The correct answer is that you'll always take a performance penalty when using jQuery instead of 'plain old' native JavaScript. That's because jQuery is a JavaScript Library. It is not some fancy new version of JavaScript.
The reason that jQuery is powerful is that it makes easy some things which are overly tedious in a cross-browser situation (AJAX is one of the best examples), smooths over the inconsistencies between the myriad of available browsers, and provides a consistent API. It also easily facilitates concepts like chaining, implied iteration, etc., to simplify working on groups of elements together.
Learning jQuery is no substitute for learning JavaScript. You should have a firm basis in the latter so that you fully appreciate what knowing the former is making easier for you.
-- Edited to encompass comments --
As the comments are quick to point out (and I agree with 100%) the statements above refer to benchmarking code. A 'native' JavaScript solution (assuming it is well written) will outperform a jQuery solution that accomplishes the same thing in nearly every case (I'd love to see an example otherwise). jQuery does speed up development time, which is a significant benefit which I do not mean to downplay. It facilitates easy to read, easy to follow code, which is more than some developers are capable of creating on their own.
In my opinion then, the answer depends on what you're attempting to achieve. If, as I presumed based on your reference to performance benefits, you're after the best possible speed out of your application, then using jQuery introduces overhead every time you call $(). If you're going for readability, consistency, cross browser compatibility, etc, then there are certainly reasons to favor jQuery over 'native' JavaScript.
There's a framework called... oh guess what? Vanilla JS. Hope you get the joke... :D It sacrifices code legibility for performance... Comparing it to jQuery below, you can see that retrieving a DOM element by ID is almost 35X faster. :)
So if you want performance you'd better try Vanilla JS and draw your own conclusions. Maybe you won't experience JavaScript hanging the browser's GUI/locking up the UI thread during intensive code like inside a for loop.
Vanilla JS is a fast, lightweight, cross-platform framework for building incredible, powerful JavaScript applications.
On their homepage there are some perf comparisons.
There's already an accepted answer but I believe no answer typed directly here can be comprehensive in its list of native javascript methods/attributes that has practically guaranteed cross-browser support. For that may I redirect you to quirksmode:
http://www.quirksmode.org/compatibility.html
It is perhaps the most comprehensive list of what works and what doesn't work on what browser anywhere. Pay particular attention to the DOM section. It is a lot to read but the point is not to read it all but to use it as a reference.
When I started seriously writing web apps I printed out all the DOM tables and hung them on the wall so that I know at a glance what is safe to use and what requires hacks. These days I just google something like quirksmode parentNode compatibility when I have doubts.
Like anything else, judgement is mostly a matter of experience. I wouldn't really recommend you to read the entire site and memorize all the issues to figure out when to use jQuery and when to use plain JS. Just be aware of the list. It's easy enough to search. With time you will develop an instinct of when plain JS is preferable.
PS: PPK (the author of the site) also has a very nice book that I do recommend reading
When:
you know that there is unflinching cross-browser support for what you are doing, and
it is not significantly more code to type, and
it is not significantly less readable, and
you are reasonably confident that jQuery will not choose different implementations based on the browser to achieve better performance, then:
use JavaScript. Otherwise use jQuery (if you can).
Edit: This answer applies both when choosing to use jQuery overall versus leaving it out, and when choosing whether to use vanilla JS inside jQuery. Choosing between attr('id') and .id leans in favor of JS, while choosing between removeClass('foo') versus .className = .className.replace( new RegExp("(?:^|\\s+)"+foo+"(?:\\s+|$)",'g'), '' ) leans in favor of jQuery.
Others' answers have focused on the broad question of "jQuery vs. plain JS." Judging from your OP, I think you were simply wondering when it's better to use vanilla JS if you've already chosen to use jQuery. Your example is a perfect example of when you should use vanilla JS:
$(this).attr('id');
Is both slower and (in my opinion) less readable than:
this.id.
It's slower because you have to spin up a new JS object just to retrieve the attribute the jQuery way. Now, if you're going to be using $(this) to perform other operations, then by all means, store that jQuery object in a variable and operate with that. However, I've run into many situations where I just need an attribute from the element (like id or src).
Are there any other common practices like this? Where certain Javascript operations could be accomplished easier, without bringing jQuery into the mix. Or is this a rare case? (of a jQuery "shortcut" actually requiring more code)
I think the most common case is the one you describe in your post; people wrapping $(this) in a jQuery object unnecessarily. I see this most often with id and value (instead using $(this).val()).
Edit: Here's an article that explains why using jQuery in the attr() case is slower. Confession: stole it from the tag wiki, but I think it's worth mentioning for the question.
Edit again: Given the readability/performance implications of just accessing attributes directly, I'd say a good rule of thumb is probably to try to use this.<attributename> when possible. There are probably some instances where this won't work because of browser inconsistencies, but it's probably better to try this first and fall back on jQuery if it doesn't work.
If you are mostly concerned about performance, your main example hits the nail on the head. Invoking jQuery unnecessarily or redundantly is, IMHO, the second main cause of slow performance (the first being poor DOM traversal).
It's not really an example of what you're looking for, but I see this so often that it bears mentioning: One of the best ways to speed up performance of your jQuery scripts is to cache jQuery objects, and/or use chaining:
// poor
$(this).animate({'opacity':'0'}, function() { $(this).remove(); });
// excellent
var element = $(this);
element.animate({'opacity':'0'}, function() { element.remove(); });
// poor
$('.something').load('url');
$('.something').show();
// excellent
var something = $('#container').children('p.something');
something.load('url').show();
I've found there is certainly overlap between JS and JQ. The code you've shown is a good example of that. Frankly, the best reason to use JQ over JS is simply browser compatibility. I always lean toward JQ, even if I can accomplish something in JS.
This is my personal view, but as jQuery is JavaScript anyway, I think theoretically it cannot perform better than vanilla JS ever.
But practically it may perform better than hand-written JS, as one's hand-written code may be not as efficient as jQuery.
Bottom-line - for smaller stuff I tend to use vanilla JS, for JS intensive projects I like to use jQuery and not reinvent the wheel - it's also more productive.
The first answer's list of live properties available when this is a DOM element is quite complete.
You may find also interesting to know some others.
When this is the document:
this.forms to get an HTMLCollection of the current document forms,
this.anchors to get an HTMLCollection of all the HTMLAnchorElements with name being set,
this.links to get an HTMLCollection of all the HTMLAnchorElements with href being set,
this.images to get an HTMLCollection of all the HTMLImageElements
and the same with the deprecated applets as this.applets
When you work with document.forms, document.forms[formNameOrId] gets the form with that name or id.
When this is a form:
this[inputNameOrId] to get the field with that name or id
When this is a form field:
this.type to get the field type
When learning jQuery selectors, we often skip learning already existing HTML elements properties, which are so fast to access.
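For example (a sketch; the form and field names are hypothetical):

// Assumes <form name="login"><input name="username" type="text"></form>
var form = document.forms["login"];   // the form by name
var field = form["username"];         // the field by name
console.log(field.type, field.value); // "text" and whatever was typed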
As usual I'm coming late to this party.
It wasn't the extra functionality that made me decide to use jQuery, as attractive as that was. After all nothing stops you from writing your own functions.
It was the fact that there were so many tricks to learn when modifying the DOM to avoid memory leaks (I'm talking about you, IE). To have one central resource that managed all those sorts of issues for me, written by people who were far better JS coders than I will ever be, and that was being continually reviewed, revised and tested, was a godsend.
I guess this sort of falls under the cross browser support/abstraction argument.
And of course jQuery does not preclude the use of straight JS when you needed it. I always felt the two seemed to work seamlessly together.
Of course if your browser is not supported by jQuery or you are supporting a low end environment (older phone?) then a large .js file might be a problem. Remember when jQuery used to be tiny?
But normally the performance difference is not an issue of concern. It only has to be fast enough. With gigahertz of CPU cycles going to waste every second, I'm more concerned with the performance of my coders, the only development resource that doesn't double in power every 18 months.
That said I'm currently looking into accessibility issues and apparently .innerHTML is a bit of a no no with that. jQuery of course depends on .innerHTML, so now I'm looking for a framework that will depend on the somewhat tedious methods that are allowed. And I can imagine such a framework will run slower than jQuery, but as long as it performs well enough, I'll be happy.
Here's a non-technical answer - many jobs may not allow certain libraries, such as jQuery.
In fact, Google doesn't allow jQuery in any of their code (nor React, because it's owned by Facebook), which you might not have known until the interviewer says "Sorry, but you can't use jQuery, it's not on the approved list at XYZ Corporation". Vanilla JavaScript works absolutely everywhere, every time, and will never give you this problem. If you rely on a library, yes, you get speed and ease, but you lose universality.
Also, speaking of interviewing, the other downside is that if you say you need to use a library to solve a JavaScript problem during a code quiz, it comes across like you don't actually understand the problem, which looks kinda bad. Whereas if you solve it in raw vanilla JavaScript it demonstrates that you actually understand and can solve every part of whatever problem they throw in front of you.
$(this) is different from this:
By using $(this) you are wrapping the element in a jQuery object, so the methods on the jQuery prototype become available on it.
Back in 2005, Quirksmode.com released this article:
http://www.quirksmode.org/dom/classchange.html
that showed "proof" that changing the style of an element by changing its class (ie. "elem.className = x") was almost twice as fast as changing its style via its style property (ie. "elem.style.someStyle = x"), except in Opera. As a result of that article, we started using a className-based solution to do things like showing/hiding elements on our site.
The problem is, one of our developers would much rather use jQuery's equivalent methods for this kind of thing (ie. "$(something).hide()"), and I'm having a hard time convincing him that our className-based function is worth using, as I can only find a single article written four years ago.
Does anyone know of any more recent or more comprehensive investigations in to this issue?
Micro-optimization is evil. I think unless you are hiding a seriously large amount of elements at once or something, the difference in milliseconds is unimportant if by some chance that article is still relevant nowadays.
With that in mind, I would go with jQuery's methods as they are battle tested and more concise.
There is a flaw in the benchmark that article uses.
In my personal experience I've never seen a case where updating a className outperforms inline style setting. I have no concrete proof of this (I do vaguely remember an article I'm going to try to dig up), but I have noticed that large clientside apps (for example gmail, or google maps) prefer setting inline styles to classNames, and it was in the context of analysis of these apps that I first heard of the speed increase in doing so.
Note that I am not promoting one over the other: setting the className dynamically goes a long way in terms of maintainability/readability and separating concerns.
While I generally agree with the practice of using classes over style attributes for many reasons (performance is one, but consistency is another), I've often seen people do things like:
function toggle(element) {
    element.style.display = element.style.display == 'none' ? 'block' : 'none';
}
(or the jQuery equivalent)
Seems reasonable right? It is until you apply it to table elements that have default display values of, for example, table-cell (not block). The class method:
.hidden { display: none; }
...
function toggle(element) {
    $(element).toggleClass("hidden");
}
is much, much better for this and other reasons.
But the jQuery methods like hide() are an exception to this. They handle display setting correctly and give you the animations.
Test it yourself. There's a tester on the page you linked. Test performance on your target browsers to determine the best method to use, performance-wise.
Personally, I would choose readability over performance. Besides, the tides may turn later (if they haven't already**). If it makes sense to use a class (i.e. you use the style for many elements), you might as well use a class. If the CSS is for an animation on an element, use style. For either, prefer jQuery's functions as they are (1) more consistent and (2) more robust and tested.
**For Opera 10, at least, the speed has definitely improved. The tests are 5/12ms locally for Opera 10, 57/88ms for Firefox 3, 14/36ms for Google Chrome, and 125/118ms (!) for IE7. IE7 (perhaps your target browser) has about the same speed for either, with style changing slightly in the lead!
Premature optimization is the root of all evil.
Are you hiding hundreds of elements per second? If not, I'd say there are bigger fish to fry. Worse, a study done in 2005 says nothing about the performance of modern browsers. No IE7 or 8, No Firefox 3 (was 2 even out?), no Chrome.
But if you insist and want your fellow colleagues to follow suit, you should write a jQuery plugin that "hide()"'s using a class instead of a style.
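A minimal sketch of such a plugin (the plugin and class names are my own invention; it relies on a .hidden { display: none; } rule in your CSS):

$.fn.classHide = function () {
    return this.addClass('hidden');
};
$.fn.classShow = function () {
    return this.removeClass('hidden');
};
// Usage: $('#menu').classHide();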
No, but I can tell you that jQuery.hide() can hide an element with a smooth effect. This cannot be done by changing the className.
You are talking about client-side code; it runs in the user's browser, so it does not load your server. I don't know what your client-side JavaScript does, but I'd guess it isn't crunching much CPU.
Your colleague's use of jQuery is probably not having a great impact, while it results in more readable code. Therefore I think he does not need to be convinced at all.