Is there any benefit to performance when I do the following in Mootools (or any framework, really)?:
var elem = $('#elemId');
elem.addClass('someClass');
elem.set('some attribute', 'some value');
etc., etc. Basically, I'm updating certain elements on the DOM a lot, and I was wondering if creating a variable in memory and using that when needed is better than:
$('#elemId').addClass('someClass');
$('#elemId').set('some attribute', 'some value');
The changes to $('#elemId') are all over the place, in various different functions.
Spencer,
This is called caching and it is one of the best practices.
When you say
$('#elemId');
it will query the DOM every time, whereas if you say
var elem = $('#elemId');
elem acts as a cached element and improves performance a lot.
This is mainly useful in IE, as it has memory leak problems.
Also, read this document, which is really good:
http://net.tutsplus.com/tutorials/javascript-ajax/14-helpful-jquery-tricks-notes-and-best-practices/
It depends how you query the DOM. Lookups by ID are extremely fast; second fastest is CSS classes. So as long as you're doing it by only a single ID (not a complex selector containing an ID), there shouldn't be much of a benefit. However, if you're using any other selector, caching is the way to go.
http://code.google.com/speed/page-speed/docs/rendering.html#UseEfficientCSSSelectors
https://developer.mozilla.org/en/Writing_Efficient_CSS
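For example, a hedged sketch of the difference (jQuery syntax, made-up selectors):

// A lone ID selector maps to document.getElementById(), so repeating
// it is relatively cheap:
$('#elemId').addClass('someClass');

// A complex selector runs the full selector engine, so cache the
// result if it is used more than once:
var $rows = $('#myTable tr.highlight');
$rows.addClass('visited');
$rows.css('color', 'red');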
Your first approach is faster than your second approach, because you "cache" the search on #elemId.
This means the calls to addClass and set don't require extra lookups in the DOM for your element.
However! You can chain function calls:
$('#elemId').addClass('someClass').set('some attribute', 'some value');
Depending on your application, caching or chaining might work better, but definitely avoid identical sequential lookups in the same block.
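For the scenario in the question, where the updates happen in various different functions, a minimal sketch of sharing one cached reference (the function names are hypothetical):

var elem = $('#elemId'); // looked up once

function highlight() {
    elem.addClass('someClass');
}

function annotate() {
    elem.set('title', 'some value'); // MooTools-style set(attribute, value)
}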
Depending on the situation, caching can be as much as 99% faster than using a jQuery object every time. In the case you presented it will not make much difference. If you plan to use the selector many times, you should definitely cache the object as a variable so it doesn't get created every time you run it.
A similar question was answered at Does using $this instead of $(this) provide a performance enhancement?.
Check the performance log: http://jsperf.com/jquery-this-vs-this
You are considering using a local variable to cache a value of a potentially slow lookup.
How slow is the call itself? If it's fast, caching won't make much of a difference. If it's slow, then cache at the first call. Selectors vary significantly in their cost; just think about how the code must find the element. If it's an ID, then the browser provides fast access, whereas classes and nodes may require full DOM scans. Check out profiling of jQuery (Sizzle) selectors to get a sense of these.
Can you chain the calls? Consider "chaining" method calls where possible. This provides the efficiency without introducing another variable.
For your example, I'd write:
$('#elemId').addClass('someClass').set('some attribute', 'some value');
How does the code read? Usually if the same method is going to be called multiple times, it is clearer to DRY it up and use a local variable. The reader then understands the intent better; you don't force them to scan all the jQuery calls to verify that they are the same. BTW, a fairly standard convention is to name jQuery variables starting with a $ (which is legal in JavaScript), as in
var $elem = $('#elem');
$elem.addClass('someClass');
Hope this helps.
I mostly call jQuery elements by the id of the DOM object using the $('#id') syntax. I think that this goes through the selector algorithm, and spends some time on that process.
I was wondering if there is a way to dig into that function and access jQuery objects from the id of the DOM in a more direct way (at a lower level) to improve performance. Is it worth doing so? If so, how can I do it?
If you want to improve performance, keep a jQuery object that's used for single-node synchronous operations:
var _id = $(document);
Then update its 0 property when you need to fetch by ID.
_id[0] = document.getElementById("id");
_id.css(...);
This eliminates the vast majority of its overhead. Because JS is single threaded, you can reuse this object wherever needed. But if you want to do some operation in a setTimeout(), you should probably just create a new object to be safe.
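A minimal sketch wrapping that trick in a helper (the name byId is hypothetical):

var _single = $(document); // one reusable single-element jQuery object

function byId(id) {
    _single[0] = document.getElementById(id);
    return _single;
}

// synchronous, single-node use only; don't hold onto the result
byId('header').css('color', 'red');
byId('footer').addClass('done');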
When you pass jQuery a single, lone ID selector, it automatically skips any selector processing and calls document.getElementById() directly. It's only when the selector string consists of anything more than an ID selector that it always treats it like a selector (this includes selectors like div#id, .class#id, or even #wrapper #id).
In most cases, this is something you don't have to worry about, since jQuery already does a lot of optimization for you. But if you're paranoid and want to remove every hair and every ounce of any possible overhead, see cookie monster's answer.
Going native will always be faster / more direct:
var id = document.getElementById("id");
But you will lose the benefits of wrapping your object in a jQuery wrapper.
You can also do:
$(document.getElementById("id"))
That way you are passing in the direct object rather than performing a lookup on the DOM.
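If you use this pattern a lot, a tiny hypothetical wrapper keeps it readable:

function $id(id) {
    // bypasses jQuery's selector engine; $(null) yields an empty set
    return $(document.getElementById(id));
}

$id('id').addClass('someClass');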
You want to use getElementById, like this:
var name = document.getElementById('id');
Actually, this will depend on the browser you use to run the script. I did some digging on the internet and found a test result on this topic.
Here is the site.
According to it, jQuery selectors are faster than the native JS code.
I am building a large website with heavy JavaScript usage; all of my content is being loaded through AJAX. It is very similar to Facebook, and since there are a lot of different pages I need a lot of JavaScript, so what I thought of is to divide my script into sections, where each page would have its own script file.
Now loading is simple, I just load a new file with each page, but my concern is: what will happen if the user goes through 100 different pages and loads 100 different script files?
At the moment my website doesn't have that many pages, but I'm quite sure it will grow to nearly 100 unique pages at some point in the future.
So what would happen to a user with a slower computer? I'm guessing it would start to slow down a lot, since there would be no refresh. From what I have read, it is impossible to just unload all events and data from a loaded script file in any easy way, and if I were to try that, it might cost me way too much time and effort.
So my question would be: should I just leave it the way it is or try to do something about it? I am currently using jQuery with a few plugins, and if I had to guess, the average file would be around 50-200 lines of code with mostly click events and AJAX calls.
Note: each page's objects have their own prefix for each class, for example: home_header, login_header.
So there shouldn't be any conflicts between onClick event listeners and similar things.
Just because you are using AJAX doesn't automatically mean alarm bells with regard to memory usage; you should be more worried about the kinds of things that cause memory leaks, and about making sure you destruct as well as construct things properly:
http://javascript.crockford.com/memory/leak.html
http://nesj.net/blog/2012/04/javascript-memory-leaks/
http://www.ibm.com/developerworks/web/library/wa-memleak/
What is JavaScript garbage collection?
As a rule, in any large system I tend to create a helper constructor that keeps track of all the items I may wish to destroy at a later date or on page unload (event listeners, large attributes or object structures), all indexed by a namespace. Then when I've finished with a particular section or entity, I ask the helper system (I call it GarbageMonkey :)) to clear a particular namespace; a minimal sketch follows the list below.
For events it unbinds
For attributes it unsets
For arrays / objects it scans and unsets each key and can do so for sub elements too
For elements it removes and cleans content as much as possible
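A minimal sketch of such a helper, assuming jQuery for the event binding; every name here is hypothetical and the real thing does more bookkeeping:

var GarbageMonkey = {
    registry: {},
    // remember an item under a namespace for later cleanup
    track: function (ns, item) {
        (this.registry[ns] = this.registry[ns] || []).push(item);
    },
    // unbind, unset and remove everything recorded for a namespace
    clear: function (ns) {
        var items = this.registry[ns] || [];
        for (var i = 0; i < items.length; i++) {
            var item = items[i];
            if (item.handler) {
                $(item.element).unbind(item.event, item.handler); // events
            } else if (item.owner) {
                delete item.owner[item.key]; // attributes / object structures
            } else if (item.element) {
                $(item.element).empty().remove(); // elements
            }
        }
        delete this.registry[ns];
    }
};

// usage (names made up):
function onLoginClick() { /* ... */ }
$('#login_submit').bind('click', onLoginClick);
GarbageMonkey.track('login', { element: '#login_submit', event: 'click', handler: onLoginClick });
// later, when the login section is done with:
GarbageMonkey.clear('login');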
Obviously for the above to work you need to be wary about leaving variables lying around that can keep a reference to the data you hope to delete. So this means being aware of what garbage collection is, what closures are, and how between them they can keep a variable alive forever, or at least until the browser/tab is destroyed! It also means using object structures rather than vars, because you can delete keys in any scope that has access to the object, but you cannot do so for vars.
So do this:
var data = {}, methods = {}, events = {};
methods.aTestMethod = function(){
  // by assigning properties to an object, you can easily remove them later
  data.value1 = 123;
  data.value2 = 456;
  data.value3 = 789;
};
Instead of this:
var value1, value2, value3;
var aTestMethod = function(){
  value1 = 123;
  value2 = 456;
  value3 = 789;
};
The reason is that in the above you can later do this:
var i;
for( i in methods ){ delete methods[i]; }
for( i in data ){ delete data[i]; }
But you can't do this:
delete value1;
delete value2;
delete value3;
Now obviously the above won't protect you from a reference that points directly to a sub-element of either methods or data. But if you only pass the methods and data objects around in your code, and keep tidy with regards to attaching methods as event listeners, then even if you do end up with a rogue reference, it should only point to an empty object (after you've deleted its contents, that is).
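A quick illustration of that last point, reusing the data object from above:

var rogue = data;                        // a leftover reference to the container
for (var k in data) { delete data[k]; }  // clear out the contents
// rogue still points at the object, but it is now just an empty {};
// the values themselves are free to be garbage collected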
If you recycle variables and don't pollute the global scope, you're on the right track; but as for your question, you should first find out if it is a practical concern.
This can be checked and monitored with a profiler; out-of-the-box Chrome is pretty decent for it. Just type about:memory in the URL bar and it'll give you a per-tab breakdown, and it will even let you compare memory usage between browsers. If you have some automated test scenarios set up (or are willing to navigate through 100 pages manually), such profiling will tell you if there's something majorly wrong with your website.
There are two different things to take care of:
- memory usage
- memory leaks
For long-running webapps, memory leaks should be absolutely avoided; otherwise users will experience browser crashes. To monitor memory use, you can download Process Explorer:
http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
Disable all your browser plugins, then use your app and perform repetitive tasks. If memory use keeps rising, you've got leaks. IE7/IE8 leak much more easily than modern browsers and are much harder to debug, so it is useful to know what your minimum browser compatibility is.
For memory usage, a few things can help decrease the weight of your app:
Instead of looping through DOM elements and attaching event handler functions, use event delegation. It is really a golden gun here (see the sketch after this list).
For IE 7/8, var nullification was necessary, and I think it still helps in modern browsers (needs some testing). To do so, you also need to name your functions so that you can remove them from memory (see pebbl's answer for more details on this).
Keep control of the DOM size. When nodes are added for a feature, they should also be removed when that feature is not used anymore.
Add to all of your components some teardown() method that handles unloading.
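As promised above, a hedged sketch of event delegation using jQuery's .on() (available since jQuery 1.7); the selectors are made up to match the naming scheme in the question:

// one handler on a stable ancestor instead of one per element
$('#content').on('click', '.home_header a', function () {
    // `this` is the clicked link, even for content added by later AJAX loads
});

// Elements inside #content can now be replaced freely without
// re-binding, and removing them leaves no stray handler references.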
OK, sorry, I am a bit too quick here, but it would be nice to know:
- what your minimum browser is
- whether you have detected leaks
- whether event delegation is a sufficient solution (often it is)
I'm building one of my first web apps using HTML5, specifically targeting iPhones.
Since I'm pretty new to this, I'm trying to develop some good coding habits, follow best practices, optimize performance, and minimize the load on the resource-constrained iPhone.
One of the things I need to do frequently... I have numerous divs (each of which has a unique id) that I'm frequently updating (e.g., with innerHTML), or modifying (e.g., style attributes with webkit transitions and transforms).
In general - am I better off using getElementById each time I need a handle to a div, or should I store references to each div I access in "global" variables at the start?
(I use "global" in quotes because I've really just got one truly global variable - it's an object that stores all my "global" variables as properties).
I assume using getElementById each time must have some overhead, since the function needs to traverse the DOM to find the div. But I'm not sure how taxing or efficient this function is.
Using global variables to store handles to each element must consume some memory, but I don't know if these references require just a trivial amount of RAM, or more than that.
So - which is better? Or, do both options consume such a trivial amount of resources that I should just worry about which produces more readable, maintainable code?
Many thanks in advance!
"In general - am I better off using getElementByID each time I need a handle to a div, or should I store references to each div"
When you're calling getElementById, you're asking it to perform a task. If you don't expect a different result when calling the same method with the same argument, then it would seem to make sense to cache the result.
"I assume using getElementByID each time must have some overhead, since the function needs to traverse the DOM to find the div. But, I'm not sure how taxing or efficient this function is."
In modern browsers especially, it's very fast, but not as fast as looking up a property on your global object.
"Using global variables to store handles to each element must consume some memory, but I don't know if these references require just a trivial amount of RAM, or more than that."
Trivial. It's just a pointer to an object that already exists. If you remove the element from the DOM with no intention to use it again, then of course you'll want to release your hold on it.
"So - which is better? Or, do both options consume such a trivial amount of resources that I should just worry about which produces more readable, maintainable code?"
Depends entirely on the situation. If you're only fetching it a couple times, then you may not find it worthwhile to add to your global object. The more you need to fetch the same element, the more sense it makes to cache it.
Here's a jsPerf test to compare. Of course size of your DOM as well as length of variable scope traversal and the number/depth of properties in your global object will play some role.
Using a local variable or even an object property is much faster than getElementById(). However, both are so fast that their performance is generally irrelevant compared to any other operation you might do once you have the element. Even setting a single property on the element is orders of magnitude slower than retrieving it by either method.
So the main reason to cache an element is to avoid the rather long-winded document.getElementById(...) syntax, or to avoid having element ID strings scattered all over your code.
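A minimal sketch of that caching pattern, with made-up ids:

var app = { el: {} }; // the single "global" object from the question

// look each div up once, e.g. on page load
app.el.status = document.getElementById('status');
app.el.gallery = document.getElementById('gallery');

// later updates reuse the stored references
app.el.status.innerHTML = 'Loading...';
app.el.gallery.style.webkitTransform = 'translate3d(0, 100px, 0)';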
I've been using jQuery a long time and I've been writing a slideshow plugin for my work, and I (not 100% consciously) wrote probably 75% of it in a single chain. It's fully commented and I specify each end() and what it's resetting to, etc., but does this slow down jQuery or the DOM loading, or does this actually speed it up?
It depends on your specific code, as always. As for storing a reference vs .end(), well: with a really long chain, it's faster not to chain and to skip the .end() calls, simply because you have to handle the extra baggage (storing/restoring), like the .prevObject reference, the .selector, the .context, etc. that you probably don't care about in many cases, and just more intertwined references to previous objects.
Where it's more costly is harder to measure: it's not the execution (though that is slower, even if infinitesimally), it's the more complicated garbage collection needed to clean up all those objects later, since the dependency graph is now much larger.
Now, will it make a measurable difference? Not unless your chain is really long, in which case it's probably a micro-optimization you need not worry about in most cases.
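For example, the same kind of work expressed with a stored reference instead of .end() bookkeeping (a sketch; the class names are made up):

var $slideshow = $(this); // store once instead of restoring via .end()
$slideshow.find('.slide').css('opacity', 0).attr('aria-hidden', 'true');
$slideshow.fadeIn();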
99% of the time, unless you're making some egregious performance penalizing call, don't worry about it, as with most micro-optimizations. If you're having a problem with performance, then get into it.
One of the most expensive things you can do in a modern browser is to access and manipulate the DOM. Chaining lets you minimize the actual lookups that you have to do, which can mean significantly faster code. The other option is to do the initial lookup, store that in a variable, and do everything off of that variable. That being said, jquery was specifically designed with that chaining api in mind, so it is more idiomatic to chain.
I think the chainability of jQuery is a great feature; one should really use it more often.
For example:
$(this)
.find('.funky')
.css('width', 30)
.attr('title', 'Funky Title')
.end()
.fadeIn();
is much better (and more elegant) than the following, since you don't have to create two jQuery $(this) objects:
$(this).find('.funky').css('width', 30).attr('title', 'Funky Title');
$(this).fadeIn();
My guess would be no difference, or faster, due to lack of intermediaries.
The only major drawback is to clarity. If the chain is obvious without making it multi-line with intermediate variables, whether by virtue of comments or just a nicely clean call chain, then that's fine.
I need to know which of these two JavaScript frameworks is better for client-side dynamic content modification for known DOM elements (by id), in terms of performance, memory usage, etc.:
Prototype's $('id').update(content)
jQuery's jQuery('#id').html(content)
EDIT: My real concerns are clarified at the end of the question.
BTW, both libraries coexist with no conflict in my app, because I'm using RichFaces for JSF development; that's why I can use "jQuery" instead of "$".
I have at least 20 updatable areas in my page, and for each one I prepare content (tables, option lists, etc.), based on some user-defined client-side criteria filtering or some AJAX event, etc., like this:
var html = [];
var idx = 0;
...
html[idx++] = '<tr><td class="cell"><span class="link" title="View" onclick="myFunction(';
html[idx++] = param;
html[idx++] = ')"></span>';
html[idx++] = someText;
html[idx++] = '</td></tr>';
...
So here comes the question, which is better to use:
// Prototype's
$('myId').update(html.join(''));
// or jQuery's
jQuery('#myId').html(html.join(''));
Other needed functions are hide() and show(), which are present in both frameworks. Which is better? Also I'm needing to enable/disable form controls, and to read/set their values.
Note that I know my updatable area's id (I don't need CSS selectors at this point). And I should mention that I'm saving these queried objects in a data structure for later use, so they are looked up just once when the page is rendered, like this:
MyData = {div1: jQuery('#id1'), div2: $('id2'), ...};
...
MyData.div1.html('content 1');   // div1 is a jQuery object
MyData.div2.update('content 2'); // div2 is a Prototype element
So, which is the best practice?
EDIT: Clarifying, I'm mostly concerned about:
Memory usage by these saved objects (it seems to me that jQuery objects add too much overhead), while OTOH my DOM elements are already modified by Prototype's extensions (loaded by default by RichFaces).
Performance (time) and memory leakage (garbage collection for replaced elements?) when updating the DOM. From the source code, I could see that Prototype replaces the innerHTML and does something with inline scripts. jQuery seems to free memory when calling "empty()" before replacing content.
Please correct me if needed...
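For reference, here is how I understand the two calls, with jQuery's cleanup step made explicit (a sketch based on my reading of the source, not verified):

// jQuery: empty() removes the old children and clears their
// jQuery data/events before the new content goes in
jQuery('#myId').empty().html(html.join(''));

// Prototype: update() replaces innerHTML and evaluates inline scripts
$('myId').update(html.join(''));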
You're better off going with jQuery. Both frameworks are great (and I use them both), but in your case jQuery is probably the winner.
In my opinion, Prototype provides a more natural syntax for JavaScript development. It does this at the cost of adding methods to some of the core classes, but it's also motivated by Ruby, where this is the norm.
jQuery, on the other hand, is far superior at DOM manipulation and event observation. Your code will be more concise and manageable in these cases and will benefit from great performance features like event delegation. jQuery also has a much larger community and way more plugins and code samples.
If you're only interested in the three basic methods "update", "hide" and "show" then jQuery is better suited. It is aimed more at DOM manipulation which is exactly what you need. Then again you could do each of those things in a couple of lines of code, saving the 26KB needed to transfer the jQuery library.
Since your worry is memory usage, look at the jQuery file: it is 77KB uncompressed. How much work do you suppose that is for the browser to execute? Probably much more than the memory freed by calling empty() on a typical DIV.
And you mention Prototype is already in use on the site, in which case you shouldn't be adding another library; jQuery's abilities are a subset of Prototype's.
This question is nearly a year old so you've probably made your decision by now. For anyone else reading the answer is simple; If something's not broken don't fix it, you already have one capable library installed so use that.
I would go for jQuery. Prototype used to modify default JS objects, which is fine, but it means you have to be careful; I believe this is no longer the case, though. jQuery also has a large plugin repository and the jQuery UI extension for widgets. By the way, with jQuery you can use the familiar dollar sign as well.