If I have a node like this:
<!-- lots of HTML here (including lots of nested unordered lists) -->
<span id="spn123">Test</span>
<!-- lots of html here -->
Buried under many levels of HTML, and I also have a JavaScript hash table like this:
[
/* lots of nodes here */
{ 123 : 'Test' }
/* lots of nodes here */
]
Then would it be faster to use
document.getElementById('spn123').innerText;
or
data(123)
I would assume that both are handled by the browser natively and are very optimized, but would the latter be faster due to its flat structure and lack of additional DOM bloat?
Thanks!
It might be useful to know why you have both an object and a DOM representation. I assume the hash is the single source of truth and the DOM renders the hash accordingly.
I would expect the hash lookup to be faster because the DOM can be slow; however, the slowness is mostly in writing to the DOM, not reading from it. I made a jsPerf test, and the results vary drastically based on the browser.
In these scenarios, I like to make a function to abstract the implementation so it can be changed later without finding every reference to the lookup in the code:
function getItem(id) {
  // 'hash' is the lookup table from the question; abstracting the access
  // lets you swap the storage strategy later without touching callers
  return hash[id];
}
If you later find a more optimized way to lookup your data, you just change that function.
Update:
I changed the jsPerf test to use textContent, but this will not work in older browsers.
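For reference, a minimal sketch of the DOM-reading side with a fallback for older browsers ('spn' + id follows the naming in the question; the rest is an assumption):
function getItemFromDom(id) {
  var el = document.getElementById('spn' + id);
  if (!el) return undefined;
  // textContent is the standard property; innerText covers older IE
  return el.textContent !== undefined ? el.textContent : el.innerText;
}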
I think that the answer to your question is, "It doesn't matter."
My thought is that, as the commenters suggested, you might want to benchmark it somehow.
However, if I'm not mistaken, document.getElementById is the fastest method there is to select a single element on your page; it shouldn't be necessary to try to optimize it unless the number of operations you're performing is extreme.
"but would the latter be faster due to its flat structure and lack of additional DOM bloat?"
Unless one is significantly and noticeably slower than the other, use the one that makes the most sense. Objects are meant for data storage and retrieval (among other things), the DOM is meant to present it visually.
So use the Object for data storage and retrieval, use the DOM to display it in a meaningful way.
JavaScript lets you add arbitrary properties and methods to any object, including DOM nodes. Assuming the property was well namespaced (something like _myLib_propertyName) so that it would be unlikely to create a conflict, are there any good reasons not to stash data in DOM nodes?
Are there any good use cases for doing so?
I imagine doing this frequently could contribute to a sloppy coding style or code that is confusing or counter-intuitive, but it seems like there would also be times when tucking "custom" properties into DOM nodes would be an effective and expedient technique.
No, it is usually a bad idea to store your own properties on DOM nodes.
DOM nodes are host objects and host objects can do what they like. Specifically, there is no requirement in the ECMAScript spec for host objects to allow this kind of extension, so browsers are not obliged to allow it. In particular, new browsers may choose not to, and existing code relying on it will break.
Not all host objects in existing browsers allow it. For example, text nodes and all ActiveX objects (such as XMLHttpRequest and XMLDOM objects, used for parsing XML) in IE do not, and failure behaviour varies from throwing errors to silent failure.
In IE, the ability to add properties can be switched off for a whole document, including all nodes within the document, with the line document.expando = false;. Thus if any of the code in your page includes this line, all code relying on adding properties to DOM nodes will fail.
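To make the failure mode concrete, a minimal sketch of the pattern under discussion (the element id and property name are hypothetical):
var node = document.getElementById('someWidget');
node._myLib_propertyName = { selected: true }; // works in most browsers...
// ...but if any script on the page has set document.expando = false (IE),
// or the node is a host object that rejects expandos (e.g. an IE text node),
// this assignment may throw an error or silently do nothing.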
I think more than anything, the best reason not to store data in the DOM is because the DOM is meant to represent content structure and styling for an HTML page. While you could surely add data to the nodes in a well-namespaced manner as to avoid conflicts, you're introducing data state into the visual representation.
I'm trying to find an example for or against storing data in the DOM, and scratching my head to find a convincing case. In the past, I've found that the separation of the data state from the visual state has saved headache during re-designs of sites I've worked on.
If you look at HTML5, there are data-* attributes that allow you to store custom information directly on an element.
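For example, a minimal sketch (the element and attribute names are hypothetical):
// <div id="user" data-user-id="42">...</div>
var el = document.getElementById('user');
var userId = el.getAttribute('data-user-id'); // "42"
el.setAttribute('data-user-id', '43');        // write it back
// newer browsers also expose this as el.dataset.userId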
Don't be surprised to run into trouble with IE 6 and 7, though; they are very inconsistent about setAttribute versus setting a property directly.
And if you're not careful, you can create circular references and give yourself a memory leak:
http://www.ibm.com/developerworks/web/library/wa-memleak/
I always set node properties as a last resort, and when I do I'm extra careful with my code and test more heavily than usual.
For what it's worth, the D3.js library makes extensive use of custom properties on DOM nodes. In particular for the following packages that introduce interaction behaviors:
https://github.com/d3/d3-zoom - the __zoom property
https://github.com/d3/d3-brush - the __brush property
D3 also provides a mechanism for putting your own custom properties on DOM elements called d3.local. The point of that utility is to generate property names that are guaranteed not to conflict with the DOM API.
The primary risk I can see associated with custom properties on DOM nodes is accidentally picking a name that already has some meaning in the DOM API. If you use d3.local or maybe prefix everything with __, that should avoid conflicts.
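For illustration, a minimal sketch of d3.local (requires D3; the stored value and selector are hypothetical):
var state = d3.local();
d3.selectAll('li').each(function (d, i) {
  // the generated property name is guaranteed not to collide with the DOM API
  state.set(this, { index: i });
});
// later, read the value back from a node:
var info = state.get(someListItem); // someListItem is a DOM node you hold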
This is more a question about best practices. When trying to apply MVC-like design pattern in a web application, I often find myself wonder how I should go about updating the View.
For example, if I am to update View_1 who has X number of elements. Is it better to:
A: iterate through each of the X elements, figuring out which ones need to be updated, and applying the DOM changes at a very fine granularity.
or
B: using the data supplied by the Model or some other data structure to regenerate the markup for this entire View and all its enclosed elements, and replacing the root element of View_1 in one DOM manipulation?
Correct me if I am mistaken. I heard that rendering engines are usually more efficient at replacing a large chunk of the DOM in one go than at performing multiple smaller DOM operations. If that is the case, then approach B is superior. However, even using template engines, I still sometimes find it difficult to avoid rewriting markup for parts of the view that haven't changed.
I looked into the source code for project Bespin before they renamed it. I distinctly remember that they implemented some sort of rendering-loop mechanism where DOM operations are queued and applied at fixed time intervals, much like how games manage their frames. This is similar to approach A, and I can see the rationale behind it. Small DOM operations applied in this manner keep the UI responsive (especially important for a web text editor). Also, this way the application can be made more efficient by only updating the elements that need to be changed; static text and aesthetic elements can remain untouched.
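A minimal sketch of such a queued render loop (the function names and interval are assumptions, not Bespin's actual code):
var pendingWrites = [];
function queueDomWrite(fn) { pendingWrites.push(fn); }
setInterval(function () {
  var batch = pendingWrites.splice(0, pendingWrites.length); // drain the queue
  for (var i = 0; i < batch.length; i++) batch[i]();         // apply in one burst
}, 16); // roughly one frame at 60fps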
Those are my arguments for both sides. What do you guys think? Are we looking for a happy medium somewhere, or one approach is by and large superior?
Also, are there any good books/papers/sites on this particular topic?
(let's assume the web app in question is interaction heavy with many dynamic updates)
It's true that rendering engines usually handle one large change faster than many small changes.
tl;dr: The Bespin way would be ideal, and if you can, do it in a worker.
Depending on the number of changes, you might want to start a worker and do the calculation of the changes inside it, since long-running JavaScript locks up the UI. You might consider the following flow (a sketch follows the list):
1. Create an object with a part of the DOM tree and also the parent id.
2. Stringify the object to JSON.
3. Start a worker.
4. Pass the stringified object into the worker.
5. Receive and parse the string.
6. Change all the necessary parts of the DOM tree that you passed in.
7. Stringify the object again.
8. Pass the object back to the main thread.
9. Parse and extract the new DOM tree.
10. Insert it into the DOM again.
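A minimal sketch of the main-thread side of that flow (the worker file name 'tree-worker.js', the element id, and the message shape are all hypothetical; the subtree is serialized as an HTML string for simplicity):
var worker = new Worker('tree-worker.js');                          // step 3
worker.onmessage = function (e) {
  var result = JSON.parse(e.data);                                  // step 9
  document.getElementById(result.parentId).innerHTML = result.html; // step 10
};
worker.postMessage(JSON.stringify({                                 // steps 1, 2 and 4
  parentId: 'view1',
  html: document.getElementById('view1').innerHTML
}));
// Inside tree-worker.js (steps 5-8): parse e.data, transform the markup,
// stringify the result, and postMessage it back.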
This will be faster if there are many changes and a small tree. If it's a big tree with few changes, it will be faster to make the changes locally in a copy of the real DOM tree and then update the DOM in one go.
Also read Google's articles about page speed:
https://code.google.com/speed/articles/
And especially this article:
https://code.google.com/speed/articles/javascript-dom.html
After 3 years of working with various web technologies, I think I finally found a great balance between the two approaches: the virtual DOM.
Libraries like vdom, Elm, Mithril.js, and to some extent Facebook's React keep a thin abstraction of the actual DOM, figure out what needs to change, and apply the smallest possible set of DOM changes.
e.g.
https://github.com/Matt-Esch/virtual-dom/tree/master/vdom
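For a flavor of the API, a minimal sketch using the virtual-dom library linked above (the view contents are hypothetical):
var h = require('virtual-dom/h');
var diff = require('virtual-dom/diff');
var patch = require('virtual-dom/patch');
var createElement = require('virtual-dom/create-element');

var tree = h('div#view1', ['count: 0']);    // describe the view
var rootNode = createElement(tree);         // render it once
document.body.appendChild(rootNode);

var newTree = h('div#view1', ['count: 1']); // describe the next state
var patches = diff(tree, newTree);          // compute the minimal change set
rootNode = patch(rootNode, patches);        // apply only what changed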
I'm building one of my first web apps using HTML5, specifically targeting iPhones.
Since I'm pretty new to this, I'm trying to develop some good coding habits, follow best practices, optimize performance, and minimize the load on the resource-constrained iPhone.
One of the things I need to do frequently... I have numerous divs (each of which has a unique id) that I'm frequently updating (e.g., with innerHTML), or modifying (e.g., style attributes with webkit transitions and transforms).
In general - am I better off using getElementById each time I need a handle to a div, or should I store references to each div I access in "global" variables at the start?
(I use "global" in quotes because I've really just got one truly global variable - it's an object that stores all my "global" variables as properties).
I assume using getElementById each time must have some overhead, since the function needs to traverse the DOM to find the div. But I'm not sure how taxing or efficient this function is.
Using global variables to store handles to each element must consume some memory, but I don't know if these references require just a trivial amount of RAM, or more than that.
So - which is better? Or, do both options consume such a trivial amount of resources that I should just worry about which produces more readable, maintainable code?
Many thanks in advance!
"In general - am I better off using getElementByID each time I need a handle to a div, or should I store references to each div"
When you're calling getElementById, you're asking it to perform a task. If you don't expect a different result when calling the same method with the same argument, then it would seem to make sense to cache the result.
"I assume using getElementByID each time must have some overhead, since the function needs to traverse the DOM to find the div. But, I'm not sure how taxing or efficient this function is."
In modern browsers especially, it's very fast, but not as fast as looking up a property on your global object.
"Using global variables to store handles to each element must consume some memory, but I don't know if these references require just a trivial amount of RAM, or more than that."
Trivial. It's just a pointer to an object that already exists. If you remove the element from the DOM with no intention to use it again, then of course you'll want to release your hold on it.
"So - which is better? Or, do both options consume such a trivial amount of resources that I should just worry about which produces more readable, maintainable code?"
Depends entirely on the situation. If you're only fetching it a couple times, then you may not find it worthwhile to add to your global object. The more you need to fetch the same element, the more sense it makes to cache it.
Here's a jsPerf test to compare. Of course, the size of your DOM, the length of variable scope traversal, and the number/depth of properties in your global object will all play some role.
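For illustration, a minimal sketch of the caching approach the asker describes (the namespace object and element ids are hypothetical):
var app = {
  els: {
    status: document.getElementById('status'),
    panel: document.getElementById('panel')
  }
};
// later, reuse the cached references instead of querying the DOM again:
app.els.status.innerHTML = 'Ready';
app.els.panel.style.webkitTransform = 'translate3d(0, 100px, 0)';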
Using a local variable or even an object property is much faster than getElementById(). However, both are so fast that their performance is generally irrelevant compared to any other operation you might do once you have the element. Even setting a single property on the element is orders of magnitude slower than retrieving it by either method.
So the main reason to cache an element is to avoid the rather long-winded document.getElementById(...) syntax, or to avoid having element ID strings scattered all over your code.
Is there any benefit to performance when I do the following in Mootools (or any framework, really)?:
var elem = $('#elemId');
elem.addClass('someClass');
elem.set('some attribute', 'some value');
etc, etc. Basically, I'm updating certain elements a lot on the DOM and I was wondering if creating a variable in memory and using that when needed was better than:
$('#elemId').addClass('someClass');
$('#elemId').set('some attribute', 'some value');
The changes to $('#elemId') are all over the place, in various different functions.
Spencer,
This is called caching, and it is one of the best practices.
When you write
$('#elemId');
it will query the DOM every time, whereas if you write
var elem = $('#elemId');
elem acts as a cached reference and improves performance a lot.
This is mainly useful in IE, as it is known for memory leak problems.
Read this document, which is really good:
http://net.tutsplus.com/tutorials/javascript-ajax/14-helpful-jquery-tricks-notes-and-best-practices/
It depends how you query the DOM. Lookups by ID are extremely fast; the next fastest is by CSS class. So as long as you're querying by a single ID (not a complex selector containing an ID), there shouldn't be much of a benefit. However, if you're using any other selector, caching is the way to go.
http://code.google.com/speed/page-speed/docs/rendering.html#UseEfficientCSSSelectors
https://developer.mozilla.org/en/Writing_Efficient_CSS
Your first approach is faster than your second approach, because you "cache" the lookup of #elemId.
This means the calls to addClass and set don't require extra lookups in the DOM for your element.
However, you can also chain the function calls:
$('#elemId').addClass('someClass').set('some attribute', 'some value');
Depending on your application, caching or chaining might work better, but definitely avoid identical sequential lookups in the same block.
Depending on the situation, caching can be as much as 99% faster than creating a jQuery object every time. In the case you presented, it will not make much difference. If you plan to use the selector many times, you should definitely cache the object in a variable so it doesn't get created every time you use it.
A similar question was answered at Does using $this instead of $(this) provide a performance enhancement?.
Check the performance comparison at http://jsperf.com/jquery-this-vs-this
You are considering using a local variable to cache the value of a potentially slow lookup.
How slow is the call itself? If it's fast, caching won't make much of a difference. If it's slow, then cache at the first call. Selectors vary significantly in their cost -- just think about how the code must find the element. If it's an ID, then the browser provides fast access, whereas classes and nodes may require full DOM scans. Check out profiling of jQuery (Sizzle) selectors to get a sense of these costs.
Can you chain the calls? Consider "chaining" method calls where possible. This provides the efficiency without introducing another variable.
For your example, I'd write:
$('#elemId').addClass('someClass').set('some attribute', 'some value');
How does the code read? Usually, if the same method is going to be called multiple times, it is clearer to DRY it up and use a local variable. The reader then understands the intent better -- you don't force them to scan all the jQuery calls to verify that they are the same. BTW, a fairly standard convention is to name jQuery variables starting with a $ -- which is legal in JavaScript -- as in
var $elem = $('#elem');
$elem.addClass('someClass');
Hope this helps.
I need to know which of these two JavaScript frameworks is better for client-side dynamic content modification for known DOM elements (by id), in terms of performance, memory usage, etc.:
Prototype's $('id').update(content)
jQuery's jQuery('#id').html(content)
EDIT: My real concerns are clarified at the end of the question.
BTW, both libraries coexist with no conflict in my app, because I'm using RichFaces for JSF development; that's why I can use "jQuery" instead of "$".
I have at least 20 updatable areas in my page, and for each one I prepare content (tables, option lists, etc.), based on some user-defined client-side criteria filtering or some AJAX event, etc., like this:
var html = [];
var idx = 0;
...
html[idx++] = '<tr><td class="cell"><span class="link" title="View" onclick="myFunction(';
html[idx++] = param;
html[idx++] = ')"></span>';
html[idx++] = someText;
html[idx++] = '</td></tr>';
...
So here comes the question, which is better to use:
// Prototype's
$('myId').update(html.join(''));
// or jQuery's
jQuery('#myId').html(html.join(''));
Other needed functions are hide() and show(), which are present in both frameworks. Which is better? I also need to enable/disable form controls and to read/set their values.
Note that I know my updatable areas' ids (I don't need CSS selectors at this point). And I should mention that I'm saving these queried objects in a data structure for later use, so they are requested just once when the page is rendered, like this:
MyData = {div1: jQuery('#id1'), div2: $('id2'), ...};
...
MyData.div1.html('content 1');
MyData.div2.update('content 2');
So, which is the best practice?
EDIT: Clarifying, I'm mostly concerned about:
Memory usage by these saved objects (it seems to me that jQuery objects add too much overhead), while OTOH my DOM elements are already modified by Prototype's extensions (loaded by default by RichFaces).
Performance (time) and memory leakage (garbage collection for replaced elements?) when updating the DOM. From the source code, I could see that Prototype replaces the innerHTML and does something with inline scripts. jQuery seems to free memory when calling "empty()" before replacing content.
Please correct me if needed...
You're better off going with jQuery. Both frameworks are great (and I use them both), but in your case jQuery is probably the winner.
In my opinion, Prototype provides a more natural syntax for JavaScript development. It does this at the cost of adding methods to some of the core classes, but it's also motivated by Ruby, where this is the norm.
jQuery, on the other hand, is far superior at DOM manipulation and event observation. Your code will be more concise and manageable in these cases and will benefit from great performance features like event delegation. jQuery also has a much larger community and way more plugins and code samples.
If you're only interested in the three basic methods "update", "hide" and "show", then jQuery is better suited. It is aimed more at DOM manipulation, which is exactly what you need. Then again, you could do each of those things in a couple of lines of code, saving the 26KB needed to transfer the jQuery library.
Since your worry is memory usage, look at the jQuery file: it is 77KB uncompressed. How much work do you suppose that is for the browser to execute? Probably much more than what is freed by calling empty() on a typical DIV.
And you mention Prototype is already in use on the site, in which case you shouldn't be adding another library. jQuery's abilities are a subset of Prototype's.
This question is nearly a year old, so you've probably made your decision by now. For anyone else reading, the answer is simple: if something's not broken, don't fix it. You already have one capable library installed, so use that.
I would go for jQuery. Prototype used to modify built-in JS objects, which is fine but means you have to be careful; I believe this is no longer the case, though. jQuery also has a large plugin repository and the jQuery UI extension for widgets. By the way, with jQuery you can use the familiar dollar sign as well.