We use jQuery functions very often. In terms of performance, I have heard that this is generally a bad idea: on one hand, jQuery code is easy to write, understand and maintain, but on the other it is said to be slower than the "uglier" raw JavaScript.
Does it really matter in terms of performance regarding the end-user-experience whether you use jQuery functions or the original functions?
For example $('#exampleId').hide() vs document.getElementById('exampleId').style.display='none'?
If so, is there a special minifier that converts poorly performing jQuery calls into faster-running equivalents?
Or are these just micro optimizations which can in most cases totally be ignored?
Optimizing code that has not been written is a great way to waste time.
jQuery may be slow, but it cuts down on a tremendous amount of bloat in JavaScript syntax.
Keeping things in perspective is important: a 1% improvement in bottleneck performance is worth more than a 100% improvement on lightly traversed code.
That being said....
Here are some ways to improve jQuery performance
Use native javascript for dom interactions:
$(selector)[0] returns the native DOM element, allowing you to use native DOM interactions where needed. Using this in loops, event handlers, and other bottlenecking code is strongly recommended for a performance boost.
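A sketch of the pattern, assuming jQuery is loaded and the page has a list to work on; stripeHide is a made-up helper name, and .get() is the jQuery method that hands back the plain array of native elements:

```javascript
// Hide every other element using native style access instead of per-iteration
// jQuery wrapping. `elements` is any array-like of DOM nodes.
function stripeHide(elements) {
  for (var i = 0; i < elements.length; i++) {
    // native property write; no $(el).hide() call inside the loop
    elements[i].style.display = i % 2 ? 'none' : '';
  }
  return elements;
}

// In the browser (assumed markup: a <ul id="list"> with <li> children):
//   stripeHide($('#list li').get());   // one jQuery query, then all-native work
```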
Favor IDs over Classes
This one is kind of a given: jQuery's class selectors carry much more performance overhead than its ID selectors.
When you use Classes, give them a context
Prevent jQuery from traversing the whole DOM using selectors. Swapping out $('.class') for $('.class', '#class-container') will hugely help performance.
Reuse selectors
Always cache your selectors in variables if you use them more than once, and never select elements inside a loop.
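One way to make caching hard to forget is a small wrapper; makeSelectorCache is my own illustrative name, not a jQuery API, and in the browser you would hand it the real jQuery function:

```javascript
// Caches the result of a query function (e.g. jQuery's $) per selector string,
// so repeated lookups hit the DOM only once.
function makeSelectorCache(query) {
  var cache = {};
  return function (selector) {
    if (!Object.prototype.hasOwnProperty.call(cache, selector)) {
      cache[selector] = query(selector);   // the only DOM hit for this selector
    }
    return cache[selector];
  };
}

// In the browser:
//   var $$ = makeSelectorCache(jQuery);
//   $$('.item').hide();
//   $$('.item').addClass('done');   // reuses the cached selection
```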
Don't use .each()
It may be tempting, but any for loop will be substantially faster.
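For comparison, here is the same traversal both ways; sumValues is a hypothetical helper, and the input can be any array-like, such as the plain array from $('.price').get():

```javascript
// A plain for loop does the same work as $.each without invoking a callback
// function for every element.
function sumValues(items) {
  var total = 0;
  for (var i = 0; i < items.length; i++) {
    total += items[i].value;
  }
  return total;
}

// jQuery equivalent (one closure invocation per element):
//   var total = 0;
//   $.each(items, function (i, el) { total += el.value; });
```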
I'll let you decide whether or not jQuery looks uglier than raw JavaScript. My opinion is that jQuery is less verbose, which I enjoy more.
Now for performance. jQuery is not as performant as raw JavaScript, which is true and which the link below clearly shows. But unless you are working on pages with thousands of element selections and jQuery calls, I wouldn't suggest moving towards raw JavaScript.
http://jsperf.com/jquery-hide-vs-javascript-hide
TL;DR - micro optimizations which can be ignored, usually.
Like many folks I learned JavaScript by learning jQuery.
Lately I have been replacing bits like:
$(this).attr('title') with this.title
$(this).attr('id') with this.id
$(this).val() with this.value
$(this).parent() with this.parentNode
$(this).attr('class') with this.className
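A sketch of what the swap looks like in practice; describe is a made-up helper, and the handler assumes some inputs exist on the page:

```javascript
// Inside a jQuery event handler, `this` is already the native element, so the
// plain DOM properties above are directly available, no $(this) wrapping needed.
function describe(el) {
  return el.id + ' (' + el.className + '): ' + el.value;
}

// In the browser:
//   $('input').on('change', function () {
//     console.log(describe(this));   // this.id, this.className, this.value
//   });
```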
Not only is my code cleaner, it is also technically faster.
Is this type of reduction acceptable and encouraged?
Are there any other common practices I should be doing in raw plain JavaScript instead of jQuery?
Are there any potential cross browser issues with this type of reduction-ism?
Whilst using native JavaScript functions is generally faster than their jQuery counterparts, doing so exposes you to any browser compatibility issues that may arise. this.value and the like are unlikely to cause problems, but other similar attributes and functions may well not work in all browsers. Using a framework like jQuery means you don't have to deal with, or worry about, such things.
I would only ever use plain JavaScript if performance is an issue, i.e. you have a lot of tight loops and repeated operations.
I would recommend using the DOM properties wherever possible. Nearly all of them will cause no problem, performance will improve and you become less reliant on jQuery. For properties like checked, for example, you're much better off forgetting all about jQuery, which only serves to add confusion to a simple task.
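For checked specifically, a sketch of the direct-property approach; summarize is a hypothetical helper, and the handler assumes a checkbox on the page:

```javascript
// Boolean DOM properties like `checked` are the simplest source of truth;
// reading them directly avoids any jQuery indirection.
function summarize(checkbox) {
  return checkbox.checked ? 'checked' : 'unchecked';
}

// In the browser:
//   $(':checkbox').on('change', function () {
//     console.log(summarize(this));   // reads this.checked, no wrapping
//   });
```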
If you're in any doubt for a particular property, you could have a look through the jQuery source to see whether it has any special handling for that property and view it as a learning exercise.
While many people reject such claims, I have also observed that avoiding or minimizing jQuery usage can yield significantly faster scripts. In particular, avoid repeated or unnecessary $() calls; instead, try to do things once, e.g. a = $(a);
Things I have noticed as being particularly costly include $(e).css({a: b}).
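To make that concrete, one cheap alternative on hot paths is writing el.style directly; applyStyles is an illustrative helper of mine, not a jQuery API:

```javascript
// Writes style properties straight onto the native element, skipping jQuery's
// per-call dispatch. At minimum, when you do use jQuery, wrap once and pass
// .css() a single object rather than calling it once per property.
function applyStyles(el, styles) {
  for (var key in styles) {
    if (Object.prototype.hasOwnProperty.call(styles, key)) {
      el.style[key] = styles[key];
    }
  }
  return el;
}

// jQuery shape for comparison:
//   var $el = $(el);                              // wrap once, reuse
//   $el.css({ width: '100px', opacity: 0.5 });    // one call, one object
```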
Google's optimizing Closure Compiler can supposedly inline such simple functions, too!
And in fact, it comes with a rather large library (the Closure Library) that offers most of the cross-browser compatibility features without introducing an entirely new notion.
It takes a bit to get used to the closure way of exporting variables and functions (so they don't get renamed!) in full optimization mode. But at least in my cases, the generated code was quite good and small, and I bet it has received some further improvements since.
https://developers.google.com/closure/compiler/
How much overhead is there when you use functions that have a huge body?
For example, consider this piece of code:
(function() {
// 25k lines
})();
How may it affect loading speed / memory consumption?
To be honest I'm not sure; a good way to help answer your question is to measure.
You can use a JavaScript profiler, such as the one built into Google Chrome; there is a mini intro to the Chrome profiler available.
You can also use Firebug's console.profile() and console.time(): http://www.stoimen.com/blog/2010/02/02/profiling-javascript-with-firebug-console-profile-console-time/
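Where a full profiler is unavailable, a minimal timing harness in the spirit of console.time/console.timeEnd can give a rough comparison; timeIt is a sketch, not a standard API:

```javascript
// Runs fn `iterations` times and returns elapsed wall-clock milliseconds.
// Rough by design: real profilers account for warm-up and engine optimization.
function timeIt(fn, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) fn();
  return Date.now() - start;
}

// In the browser one might compare, e.g.:
//   timeIt(function () { $('#exampleId').hide(); }, 10000);
//   timeIt(function () { document.getElementById('exampleId').style.display = 'none'; }, 10000);
```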
Overhead is negligible on a static function declaration regardless of size. The only performance loss comes from what is defined inside the function.
Yes, you will have large closures that contain many variables, but unless you're declaring several tens of thousands of private variables in the function, or executing that function tens of thousands of times, you won't notice a difference.
The real question here is: if you split that function up into multiple smaller functions, would you notice a performance increase? The answer is no; you should actually see a slight performance decrease from the added overhead, although your memory allocator should at least be able to collect some unused variables.
Either way, JavaScript is most often only bogged down by obviously expensive tasks, so I wouldn't bother optimizing until you see a problem.
Well, that's almost impossible to answer.
If you really want to understand memory usage, automatic garbage collection and the other nitty-gritty of closures, start here: http://jibbering.com/faq/notes/closures/
Firstly, products like jQuery are built using closures extremely heavily. jQuery is considered to be a very high-performance piece of JavaScript code, which should tell you a lot about the coding techniques it uses.
The exact performance of any given feature will vary between browsers, as they all have their own scripting engines, all independently written and with different optimisations. But one thing they will all have done is tried to give the best optimisations to the most commonly used JavaScript features. Given the prevalence of jQuery and its like, you can bet that closures are very heavily optimised.
And in any case, with the latest round of browser releases, their scripting engines are all now sufficiently high-performance that you'd be hard pushed to find anything in the basic language constructs which constitutes a significant performance issue.
This is more of a question of style and preference than anything, though it's possible that there might be performance considerations as well.
Suppose you're using a framework (say jQuery for the sake of argument, though it could be any framework) and you need to write a new function. It's a simple function, and you could easily accomplish it without using the framework.
Is there an advantage to using the framework anyway, because it's already loaded in the browser's memory, has a readily-accessible map of the DOM, etc.? Or will plain-vanilla js always parse faster because it's "raw" and doesn't depend on the framework?
Or is it simply a matter of taste?
The answer is going to depend greatly on what you're working to accomplish. In general, you're guaranteed at least a minor performance penalty for function overhead if you use a framework to achieve something that can be accomplished using "vanilla" JavaScript. This performance penalty is typically nominal and can be disregarded when taking other advantages of your framework into mind (speed of development, cleaner code, ease of maintenance, reusable code, etc).
If you absolutely have to have the most efficient code possible then you should try to write pure JavaScript that's highly optimized. If, like in most real world scenarios, you're not concerned about a handful of milliseconds in performance difference, stick with your Framework to maintain consistency.
There's always something to learn when you're solving problems with pure JS as opposed to having external code do it for you. In the long run, it's more maintainable because it's your code. It's not going to change. You know what it does. That's where the value of solving your own problems really comes into play. If you do your research on MDC, MSDN, and the ECMAScript spec, cross-browser scripting becomes a lot easier to process. Sure, Microsoft has their own ideas and their own DOM, but that's where the fun (read: challenge) is.
Cross-browser scripting in pure JS really heightens your problem-solving ability along with your understanding of the language. If there still are things that confound you, then jQuery can come into the mix and bridge the mental gap, so to speak. It's great to drive around in a luxury vehicle, but what use is it if you don't know how to change a tire when it goes flat? The best jQuery devs are the ones that know JavaScript well and know when to use jQuery, and when to use plain JS.
Sometimes, you just have to roll up your sleeves and do some hard work. There isn't a jQuery plugin for everything, and jQuery can't hide you from all the quirks that various browsers have to offer. Getting the job done with your own code is very rewarding, even if you had to sweat it out to make it work.
It's perfectly acceptable to use many different tools to complete a singular task. You just need to know when and where to use them.
From my understanding of jQuery, it doesn't actually maintain a map of the DOM in memory; it just has cross-browser methods for walking the DOM. Some things will naturally be faster in some browsers than in others (for example, a class-based selector will be faster in Firefox than in IE, because Firefox has a built-in getElementsByClassName and IE doesn't). If you don't need the framework's methods for doing something, I would say go ahead and use native JS, as that is generally what your chosen framework will use anyway.
I would say do it with the framework, just because it will bring consistency to the project. If you are using the framework everywhere, even in small functions, it will be easier to maintain.
As for the other factor it really depends on what you are trying to do.
I've been working on a javascript-heavy project. I've found that almost every time I had a cross-browser bug in my code, it was in a place where I had code like this:
var element = $(selector);
// lots of code ...
element[0].someVanillaOperation();
and that vanilla wasn't exactly the same across all browsers. What I love about jQuery is that (most of the time) it hides the browser differences and its functions work the same across them all.
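One way to limit that exposure is to confine native access to a tiny, well-understood helper, so the browser-dependent surface stays in one known spot; nativeFirst is my own name for such a helper:

```javascript
// Returns the underlying native DOM node of a jQuery-style array-like wrapper,
// or null if the selection is empty. Keep only well-supported property reads
// (e.g. .id, .value) on the result; anything quirky stays behind jQuery.
function nativeFirst($wrapped) {
  return $wrapped.length ? $wrapped[0] : null;
}

// In the browser:
//   var el = nativeFirst($(selector));
//   if (el) { console.log(el.id); }   // safe property; quirky calls stay on $()
```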
If you're selecting elements by ID then plain Javascript is faster. It doesn't, however, provide any of the selection niceties that you get with jQuery - selecting multiple elements by class in a single call, for example.
Take a look at this link: http://www.webkit.org/perf/slickspeed/ which runs a speed test. It's an older version of jQuery, but the results in terms of raw speed speak for themselves.
Personally, I tend to use jQuery for everything - it keeps the code cleaner and the fact it pretty much dispenses with cross-browser JS support issues is worth any performance overhead in my book.
When using a JavaScript framework such as jQuery, is there a real chance of overusing the library for things that can be done simply using plain old JavaScript?
If so, then does this type of thing:
A: Slow code down
B: Make code less portable
C: Make the programmer less knowledgeable about what is actually going on underneath everything
I'm thinking of things like using jQuery's .each instead of a simple for loop. Sure, the plain loop adds a bit of code, but then it's 'real' JavaScript, if you get what I mean.
Maybe I'm just being naive.
Well, I suppose there's a chance, but in general the advantages far outweigh the disadvantages.
In general
a) It may slow code down slightly if you're doing something that would be simple in pure JS, but in most cases that's been optimized in jQuery anyway. On the other hand, the naive way you'd do anything complicated is probably not as fast as Resig et al. will have done it.
b) It certainly makes code less portable in the sense that it's going to depend on the jQuery libraries. On the other hand, it will be more portable across browsers and versions, which is the more important consideration.
c) yes, it may conceal some of the javascript magic. My experience, however, is that you eventually have to learn it anyway; in the mean time jQuery makes you much more productive, much faster.
(Note, also, that these points actually apply to most libraries. jQuery is my favorite, but I write a lot with dojo, and have used prototype, scriptaculous, and YUI happily.)
B) it makes code more portable, not less, because differences between browsers are handled by the framework implementation.
As for slowness, I would think that the initial load of jQuery covers most of your functionality. If you are loading 20 plugins then you may run into some issues.
jQuery is no less portable than any other JS file, and even less of a concern if you are using a CDN.
I do tend to agree with the third point. I tend to not fuss with much actual JS anymore I just use jQuery to do everything, within reason.
Overall, I think jQuery and other JS lib's are one of the best things to happen to web development in the last bit.
The really great thing about jQuery is that they have already come up with quick, smooth code that helps to protect you from cross-browser hazards. So, I am sure anything can be abused, but the benefit of knowing that my code is more likely to keep up with future browser changes, simply by updating jQuery, without worrying about my outdated JavaScript code, gives me a little more peace of mind. Is it perfect? Noooooo. But right now, it makes my life sooooo much easier, both now and in the foreseeable future. If you write "raw" JavaScript-only code, then if one single browser changes the way it handles your situation, that is one less segment of users that can efficiently view your site.
I figure if I'm going to load a library on a page, I may as well use it as much as I can. I try to get the bang for my buck (so to speak).
Of course, like any "new" technology, it is overused.
It's the same for things like Linq or CSS adapters for .NET.
The rule is if you can make it simple and efficient, do it!
My latest project is using a javascript framework (jQuery), along with some plugins (validation, jquery-ui, datepicker, facebox, ...) to help make a modern web application.
I am now finding pages loading slower than I am used to. After some js profiling (thanks VS2010!), it seems a lot of the time is taken processing inside the framework.
Now, I understand that the more complex the UI tools, the more processing needs to be done. But the project is not yet at a large stage, and the functions involved are fairly average. At this stage, I can see it is not going to scale well.
I noticed things like the 'each' command in jQuery takes quite a lot of processing time.
Have others experienced some extra latency using JS frameworks?
How do I minimize their effect on page performance?
Are there best practices on implementation using JS frameworks?
Thanks
My personal take is to use the framework methods and tools where they make sense and make life easier, for example selectors and solving cross-browser quirks, and to use plain old vanilla JavaScript where there is no need to use the framework methods, for example, in simple loops.
I would check and double check the code that you have that uses the framework to ensure that it will perform as well as it can; it is all too easy to use a framework in a poor performing fashion and sometimes one doesn't discover this until one profiles it :)
Frameworks do introduce extra latency as there are usually a number of functions that are executed as a result of using the entry point function into using them.
EDIT:
Some general points with regards to using jQuery:
1. Cache your jQuery objects in local variables if you are going to use them more than once. Querying the DOM is a relatively expensive operation and therefore should be done as little as is needed. If you have related selectors, take a look at performing a wide selection and then using methods such as find(), filter(), next(), prev(), etc. to narrow the collection to the relevant elements, rather than using another selector function to get them.
2. Inside functions, don't wrap objects in jQuery objects unnecessarily. If there is a reliable cross-browser way of accessing an object property value, use that. For example, the value property of a text input HTMLElement:
$('input:text').each(function() {
// use
this.value
// don't worry about this
$(this).val();
});
3. Try to avoid adding large script files when you're using only a small piece of their functionality. A lot of time can be spent parsing and executing code on page load that you're never going to use! If possible, extract only the relevant code that is needed. I appreciate that this can be hard and is not always possible, particularly when it comes to versioning, but it's worth pointing out nonetheless.