(I'm using Prototype.js here, but I imagine the same holds true across other libraries as well.)
I often find myself writing code like this:
var search_box;
Event.observe(window, 'load', function() {
    search_box = $('search_box');
});

function doSomething(msg) {
    search_box.innerHTML = msg;
}
Rather than simply writing it like this:
function doSomething(msg) {
    $('search_box').innerHTML = msg;
}
My intention is to avoid having to traverse the entire DOM searching for the "search_box" element every time I need access to it. So I search for it once on page load and then keep the reference in a global variable. However, I don't recall ever seeing anyone else do this. Am I needlessly making my code more complex?
This is called premature optimization.
You are introducing a global variable to optimize something you have not profiled.
Your assumption that $ "traverses the DOM" is incorrect. This function is implemented using document.getElementById, which is the fastest way to access an element in the DOM.
I suggest coding your JavaScript using basic programming best practices, such as avoiding global variables and not optimizing without profiling. Once your application is working as expected, profile it (using Firebug) and address the area(s) where it is slow.
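For example, here is a rough way to check whether the lookup cost matters at all. This is only a sketch: it reuses the search_box id from the question, and console.time is a Firebug/console feature, so adjust to taste:
var cached = $('search_box');

// Time 10,000 repeated lookups...
console.time('repeated lookup');
for (var i = 0; i < 10000; i++) {
    $('search_box');
}
console.timeEnd('repeated lookup');

// ...against 10,000 accesses through a cached reference.
console.time('cached reference');
for (var j = 0; j < 10000; j++) {
    cached.innerHTML;
}
console.timeEnd('cached reference');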
I usually do the same thing. The reason you don't see it often is probably that you don't often see well-written, optimized code (never mind the whole "premature optimization is evil" thing). I say if you can optimize it without headaches, then why not?
Realistically speaking, though, that's a very trivial DOM lookup. You should only begin to worry if you're iterating through dozens of elements and being vague in the selectors, so I wouldn't worry too much about it unless you can really notice certain parts of your web page loading rather slowly. In that case, store the elements you access repeatedly in a variable in the outer scope.
Good:
(function() {
    var els = $$('.foo span'); // also better to specify a context, but I'm not sure how that's done in Prototype; it's the second param in jQuery
    function foo() {
        els.something();
    }
    els.somethingElse();
})();
Bad:
(function() {
    var els = $$('.foo span');
    function foo() {
        $$('.foo span').something();
    }
    $$('.foo span').somethingElse();
})();
I decided to spend a bit of time doing some testing to get some hard data. The answer is that preloading the elements into global variables is twice as efficient as accessing them using the DOM getElementById method. (At least under FF 3.6).
Subsequent accesses to the objects are also more efficient using the global variable method, but only marginally so.
I have some trouble with my JavaScript (JS) code, since I sometimes need to access the same DOM elements more than once in the same function. Some reasoning is also provided here.
From the point of view of the performance, is it better to create a jQuery object once and then cache it or is it better to create the same jQuery object at will?
Example:
function() {
    $('selector XXX').doSomething(); // first call
    $('selector XXX').doSomething(); // second call
    ...
    $('selector XXX').doSomething(); // n-th call
}
or
function() {
    var obj = $('selector XXX');
    obj.doSomething(); // first call
    obj.doSomething(); // second call
    ...
    obj.doSomething(); // n-th call
}
I suppose that the answer probably depends on the value of "n", so assume that n is a "small" number (e.g. 3), then a medium number (e.g. 10), and finally a large one (e.g. 30, as if the object were used for comparison in a for loop).
Thanks in advance.
It is always better to cache the element if n is greater than 1, or to chain the operations together (you can do $('#something').something().somethingElse(); for most jQuery operations, since they usually return the wrapped set itself). As an aside, it has become something of a standard to name cache variables beginning with a dollar sign $ so that later in the code it is evident you are performing an operation on a jQuery set. So you will see a lot of people do var $content = $('#content'); and then $content.find('...'); later on.
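A minimal sketch of both conventions (#content and .item are just illustrative selectors):
// Cache once, with the conventional $ prefix, then reuse:
var $content = $('#content');
$content.addClass('loaded');
$content.find('.item').show();

// Or avoid the variable entirely by chaining, since most jQuery
// methods return the wrapped set itself:
$('#content').addClass('loaded').find('.item').show();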
The second is superior. Most importantly, it is cleaner. In the future, if you want to change your selector, you only need to change it in one place; otherwise you need to change it in N places.
Secondly, it should perform better, although a user would only notice with a particularly heavy DOM, or if you were invoking that function a lot.
If you look at this question from a different perspective, the correct answer is obvious.
In the first case, you're duplicating the selection logic in every place it appears. If you change the name of the element, you have to change each occurrence. This should be reason enough not to do it. Now you have two options: either you cache the element's selector or the element itself. Using the element as an object makes more sense than using the name.
Performance-wise, I think the effect is negligible. Probably you'll be able to find test results for this particular use-case: caching jQuery objects vs always re-selecting them. Performance might become an issue if you have a large DOM and do a lot of lookups, but you need to see for yourself if that's the case.
If you want to see exactly how much memory your objects are taking up, you can use the Chrome Heap Profiler and check there. I don't know if similar tools are available for other browsers and probably the implementations will vary wildly in performance, especially in IE's case, but it may satisfy your curiosity.
IMO, you should use the second variant, storing the result of the selection in a variable, not so much to improve performance but to have as little duplicate logic as possible.
As for caching $(this), I agree with Nick Craver's answer. As he said there, you should also use chaining where possible - cleans up your code and solves your problem.
You should take a look at
http://www.artzstudio.com/2009/04/jquery-performance-rules/
or
http://addyosmani.com/jqprovenperformance/
I almost always prefer to cache the jQuery object, but the benefit varies greatly based on exactly what you are using for your selector. If you are using ids then the benefit is far less than if you are using other types of selectors. Also, not all selectors are created equal, so try to keep that in mind when you write your selectors.
For example:
$('table tr td') is a very poor selector. Try to use context or .find() and it will make a BIG difference.
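For instance, assuming the table in question has an id (hypothetical markup):
// Poor: scans every table, row and cell on the page.
$('table tr td').addClass('numeric');

// Better: anchor the search to a known element, then look inside it.
$('#scores').find('td').addClass('numeric');

// Equivalent, using jQuery's context argument:
$('td', '#scores').addClass('numeric');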
One thing I like to do is place timers in my code to see just how efficient it is.
var timer = new Date();
// code here
console.log('time to complete: ' + (new Date() - timer));
Most operations on cached objects complete in less than 2 milliseconds, whereas brand-new selectors take quite a bit longer, because you first have to find the element and then perform the operation.
In JavaScript, functions are generally short-lived—especially when hosted by a browser. However, a function’s scope might outlive the function. This happens, for example, when you create a closure. If you want to prevent a jQuery object from being referenced for a long time, you can assign null to any variables that reference it when you are done with that variable or use indirection to create your closures. For example:
var createHandler = function (someClosedOverValue) {
    return function () {
        doSomethingWith(someClosedOverValue);
    };
};

var blah = function () {
    var myObject = jQuery('blah');
    // We want to enable the closure to access 'red' but not keep
    // myObject alive, so use a special createHandler for it:
    var myClosureWithoutAccessToMyObject = createHandler('red');
    doSomethingElseWith(myObject, myClosureWithoutAccessToMyObject);
    // After this function returns, and assuming doSomethingElseWith() does
    // not itself generate additional references to myObject, myObject
    // will no longer have any references and will be eligible for garbage
    // collection.
};
Because jQuery(selector) might end up having to run expensive algorithms or even walk the DOM tree a bit for complex expressions that can’t be handled by the browser directly, it is better to cache the returned object. Also, as others have mentioned, for code clarity, it is better to cache the returned object to avoid typing the selector multiple times. I.e., DRY code is often easier to maintain than WET code.
However, each jQuery object has some amount of overhead. So storing large arrays of jQuery objects in global variables is probably wasteful, unless you actually need to operate on large numbers of these objects and still treat them as distinct. In such a situation, you might save memory by caching arrays of the DOM elements directly and using the jQuery(DOMElement) constructor, which should basically be free, when iterating over them.
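A sketch of that last idea (the selector is illustrative):
// Store plain DOM elements rather than an array of jQuery objects;
// .get() returns the raw elements from the wrapped set.
var cells = jQuery('#scores td').get();

for (var i = 0; i < cells.length; i++) {
    // jQuery(DOMElement) just wraps the element: no selector
    // parsing, no DOM search.
    jQuery(cells[i]).toggleClass('highlight');
}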
Though, as people say, you can only know the best approach for your particular case by benchmarking different approaches. It is hard to predict reality even when theory seems sound ;-).
I'm sure, this question has been answered somewhere before but I just couldn't find it.
If a variable has been defined within a function, what is the best practice for saving it for later use?
1. Saving it "globally"?
var foo = 'bar';
...
function bar() {
    ...
    foo = 'bat';
    return foo;
}
...
Here, the variable will be altered later on.
2. Or saving it within a hidden form field in the HTML DOM?
Thanks!
Saving it as a global JavaScript variable is by far the most efficient.
EDIT: If the data you want to save is associated with an element on the page (for example, each row in a table has a bit of data associated with it), and you are using jQuery, there is a data() method which is more efficient than setting an attribute on the element or something similar.
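For example (the row selector and the key are illustrative):
// Associate a bit of data with each table row:
$('#results tr').each(function (i) {
    $(this).data('rowInfo', { index: i, selected: false });
});

// ...and read it back later without touching DOM attributes:
var info = $('#results tr').eq(0).data('rowInfo');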
It depends on the context, but probably: In a variable defined at the top level of a closure that wraps the set of functions to which it applies.
i.e.
var exports = function () {
    var stored_data;

    function set_data(foo) {
        stored_data = foo;
    }

    function get_data() {
        return stored_data;
    }

    return { get: get_data, set: set_data };
}();
This avoids the risk of other scripts (or other parts of your own, potentially very large, script) overwriting it by accident.
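Usage would then look like:
exports.set('bar');
console.log(exports.get()); // "bar"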
The HTML5 spec has defined a solution to this question: If you are using HTML5, you can specify data attributes in your DOM.
See this page for more info: http://ejohn.org/blog/html-5-data-attributes/
This is now the standardised way of doing it, so I guess it's considered best practice. Also, John Resig, who wrote the blog post I linked to above, is the author of jQuery, so if it's good enough for him, who am I to argue?
The really good news is that you don't even have to be using an HTML5-compatible browser for this technique to work - it already works in older browsers; it's just that now it's been encoded into the standard, and there's a defined way to do it.
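A quick sketch of the idea (the element and attribute names are made up):
// Given markup like: <div id="user" data-account-id="42"></div>
var el = document.getElementById('user');

// getAttribute works even in browsers that predate HTML5:
var accountId = el.getAttribute('data-account-id'); // "42"

// HTML5 browsers also expose the same value via the dataset API:
// el.dataset.accountId === "42"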
That said, there's nothing wrong with a global variable in your JavaScript as long as you avoid polluting the namespace too much, and it would be more efficient from a performance perspective, so there's plenty of merit in that approach as well.
I would really like to provide the user with some scripting capabilities while not giving them access to the more powerful features, like altering the DOM. That is, all input/output is tunneled through a given interface. Like a kind of restricted JavaScript.
Example:
If the interface is checkanswer(func), this is allowed:
checkanswer(function (x, y) {
    return x + y;
});
but these are not allowed:
alert(1)
document.write("hello world")
eval("alert()")
EDIT: what I had in mind was a simple language that was implemented using javascript, something like http://stevehanov.ca/blog/index.php?id=92
(Edit: This answer relates to your pre-edit question. I don't know of any script languages implemented using JavaScript, although I expect there are some. For instance, at one point someone wrote BASIC for JavaScript (used to have a link, but it rotted). The remainder of this answer is therefore pretty academic, but I've left it for discussion, illustration, and even cautionary purposes. Also, I definitely agree with bobince's points: don't do this yourself; use the work of others, such as Caja.)
If you allow any scripting in user-generated content, be ready for the fact you'll be entering an arms race of people finding holes in your protection mechanisms and exploiting them, and you responding to those exploits. I think I'd probably shy away from it, but you know your community and your options for dealing with abuse. So if you're prepared for that:
Because of the way that JavaScript does symbol resolution, it seems like it should be possible to evaluate a script in a context where window, document, ActiveXObject, XMLHttpRequest, and the like don't have their usual meanings:
// Define the scoper
var Scoper = (function() {
    var rv = {};
    rv.scope = function(codeString) {
        var window,
            document,
            ActiveXObject,
            XMLHttpRequest,
            alert,
            setTimeout,
            setInterval,
            clearTimeout,
            clearInterval,
            Function,
            arguments;
        // etc., etc., etc.

        // Just declaring `arguments` doesn't work (which makes
        // sense, actually), but overwriting it does
        arguments = undefined;

        // Execute the code; still probably pretty unsafe!
        eval(codeString);
    };
    return rv;
})();

// Usage:
Scoper.scope(codeString);
(Now that uses the evil eval, but I can't immediately think of a way to shadow the default objects cross-browser without using eval, and if you're receiving the code as text anyway...)
But it doesn't work, it's only a partial solution (more below). The logic there is that any attempt within the code in codeString to access window (for instance) will access the local variable window, not the global; and the same for the others. Unfortunately, because of the way symbols are resolved, any property of window can be accessed with or without the window. prefix (alert, for instance), so you have to list those too. This could be a long list, not least because as bobince points out, IE dumps any DOM element with a name or an ID onto window. So you'd probably have to put all of this in its own iframe so you can do an end-run around that problem and "only" have to deal with the standard stuff. Also note how I made the scope function a property of an object, and then you only call it through the property. That's so that this is set to the Scoper instance (otherwise, on a raw function call, this defaults to window!).
But, as bobince points out, there are just so many different ways to get at things. For instance, this code in codeString successfully breaks the jail above:
(new ('hello'.constructor.constructor)('alert("hello from global");'))()
Now, maybe you could update the jail to make that specific exploit not work (mucking about with the constructor properties on all — all — of the built-in objects), but I tend to doubt it. And if you could, someone (like Bob) would just come up with a new exploit, like this one:
(function(){return this;})().alert("hello again from global!");
Hence the "arms race."
The only really thorough way to do this would be to have a proper Javascript parser built into your site, parse their code and check for illegal accesses, and only then let the code run. It's a lot of work, but if your use-case justifies it...
T.J. Crowder makes an excellent point about the "arms race." It's going to be very tough to build a watertight sandbox.
It's possible to override certain functions quite easily, though.
Simple functions:
JavaScript: Overriding alert()
And according to this question, even overriding things like document.write is as simple as
document.write = function(str) {}
If that works in the browsers you need to support (I assume it works in all of them), that may be the best solution.
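A sketch of that approach, stubbing out a few obvious globals before running untrusted code (the list is nowhere near exhaustive, as other answers here make clear):
// Replace dangerous functions with no-ops. Untrusted code can
// still restore or bypass these, so treat this as a speed bump,
// not a wall.
window.alert = function () {};
window.confirm = function () { return false; };
document.write = function (str) {};
window.XMLHttpRequest = function () {};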
Alternative options:
Sandboxing the script into an IFrame on a different subdomain. It would be possible for the script to manipulate its own DOM and emit alert()s and such, but the surrounding site would remain untouched. You may have to do this anyway, no matter which method(s) you choose.
Parsing the user's code using a white list of allowed functions. Awfully complex to do as well, because there are so many notations and variations to take care of.
There are several methods to monitor the DOM for changes, and I'm pretty sure it's possible to build a mechanism that reverts any changes immediately, quite similar to Windows's DLL management. But it's going to be awfully complex to build and very resource-intensive.
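For what it's worth, browsers have since added MutationObserver, which is the natural fit for that monitoring idea. Here is a rough sketch of the "revert insertions" part only; undoing attribute and text changes takes far more bookkeeping:
// Watch a protected subtree and undo simple node insertions.
var protectedRoot = document.getElementById('app'); // illustrative id
var observer = new MutationObserver(function (mutations) {
    mutations.forEach(function (m) {
        for (var i = 0; i < m.addedNodes.length; i++) {
            var node = m.addedNodes[i];
            if (node.parentNode) {
                node.parentNode.removeChild(node);
            }
        }
    });
});
observer.observe(protectedRoot, { childList: true, subtree: true });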
Not really. JavaScript is an extremely dynamic language with many hidden or browser-specific features that can be used to break out of any kind of jail you can devise.
Don't try to take this on yourself. Consider using an existing ‘mini-JS-like-language’ project such as Caja.
Sounds like you need to process the user entered data and replace invalid mark-up based on a white list or black-list of allowed content.
You can do it the same way as Facebook did. They're preprocessing all the JavaScript sources, adding a prefix to all names other than those of their own wrapper APIs.
Another way: use the Google Gears WorkerPool API. See:
http://code.google.com/apis/gears/api_workerpool.html
A created worker does not have access to the DOM; objects like document and window exist only on the main page. This is a consequence of workers not sharing any execution state. However, workers do have access to all JavaScript built-in functions. Most Gears methods can also be used, through a global variable that is automatically defined: google.gears.factory. (One exception is the LocalServer file submitter, which requires the DOM.) For other functionality, created workers can ask the main page to carry out requests.
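Based on that documentation (Gears has long since been discontinued, so treat this as a historical sketch rather than working advice), running code in a DOM-less worker looked roughly like this:
// Create a worker pool; the worker body is a string of code
// that cannot touch document or window.
var workerPool = google.gears.factory.create('beta.workerpool');

workerPool.onmessage = function (messageText, senderId, message) {
    // Replies from the worker arrive here, on the main page.
};

var workerId = workerPool.createWorker(
    'var wp = google.gears.workerPool;' +
    'wp.onmessage = function (text, sender, m) {' +
    '    wp.sendMessage("got: " + text, m.sender);' +
    '};'
);
workerPool.sendMessage('hello', workerId);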
What about this pattern for implementing a sandbox?
function safe(code, args) {
    if (!args)
        args = [];
    return (function() {
        for (var i in window)
            eval("var " + i + ";");
        return function() { return eval(code); }.apply(0, args);
    })();
}

var ff = function() {
    return 3.14;
};
console.log(safe("this;"));//Number
console.log(safe("window;"));//undefined
console.log(safe("console;"));//undefined
console.log(safe("Math;"));//MathConstructor
console.log(safe("JSON;"));//JSON
console.log(safe("Element;"));//undefined
console.log(safe("document;"));//undefined
console.log(safe("Math.cos(arguments[0]);",[3.14]));//-0.9999987317275395
console.log(safe("arguments[0]();",[ff]));//3.14
That returns:
Number
undefined
undefined
MathConstructor
JSON
undefined
undefined
-0.9999987317275395
3.14
Can you please provide an exploit suitable for attacking this solution? Just to understand and improve my knowledge, of course :)
Thanks!
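For what it's worth, the escape shown earlier in this thread appears to apply here as well: shadowing window's properties with local vars cannot stop code from reaching the global object through this in a plain function call:
// A non-strict function invoked without a receiver gets the real
// global object as `this`, no matter how many globals are shadowed:
console.log(safe("(function () { return this; })().document;")); // the real document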
This is now easily possible with sandboxed IFrames:
var codeFunction = function(x, y) {
    alert("Malicious code!");
    return x + y;
};

var iframe = document.createElement("iframe");
iframe.sandbox = "allow-scripts";
iframe.style.display = "none";
iframe.src = `data:text/html,
    <script>
        var customFunction = ${codeFunction.toString()};
        window.onmessage = function(e) {
            parent.postMessage(customFunction(e.data.x, e.data.y), '*'); // Get arguments from input object
        }
    </script>`;
document.body.appendChild(iframe);

iframe.onload = function() {
    iframe.contentWindow.postMessage({ // Input object
        x: 5,
        y: 6
    }, "*");
};

window.onmessage = function(e) {
    console.log(e.data); // 11
    document.body.removeChild(iframe);
};
I find myself doing a lot of this kind of jQuery:
$('.filter-topic-id').each(function () {
    var me = $(this);
    if (me.text() == topics[k]) {
        me.parent().show();
    }
});
I store $(this) in a variable called me because I'm afraid it will re-evaluate $(this) for no reason. Are the major JavaScript engines smart enough to know that it doesn't have to re-evaluate it? Maybe even JQuery is smart enough somehow?
They are not smart enough to know not to re-evaluate $(this) again, if that's what your code says. Caching a jQuery object in a variable is a best practice.
If your question is comparing your way in the question with this way:
$('.filter-topic-id').each(function () {
    if ($(this).text() == topics[k]) { // jQuery object created, passing in this
        $(this).parent().show(); // another jQuery object created, passing in this
    }
});
then your way is the best practice.
Are the major JavaScript engines smart enough to know that it doesn't have to re-evaluate it?
No. But if you are using jQuery you are presumably aiming for readability rather than necessarily maximum performance.
Write whichever version you find easiest to read and maintain, and don't worry about micro-optimisations like this until your page is too slow and you've exhausted other more significant sources of delay. There is not a lot of work involved in calling $(node).
You could try profiling your code with Firebug to see whether using $(this) many times slows your app down.
There is no good way for a JavaScript engine to determine that the following is true:
fn(x) == fn(x);
Even if this were possible, skipping the second call to fn would only be valid if it could be guaranteed that fn has no other side effects. When there is other code between the calls to fn, it's even more difficult.
Hence JavaScript engines have no choice but to actually call fn each time it is invoked.
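A tiny example of why eliding the second call would be unsafe:
var count = 0;
function fn(x) {
    count += 1; // a side effect the engine cannot ignore
    return x;
}

fn(1) == fn(1); // true, but skipping the second call would
                // leave count at 1 instead of 2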
The overhead of calling $() is quite small but not insignificant. I would certainly hold the result in a local variable as you are doing.
this is just a reference to the current DOM element in the iteration, so there's little or no overhead involved when calling $(this). It just creates a jQuery wrapper around the DOM element.
I think you'll find that calling the jQuery function by passing a DOM element is perhaps the least intensive way to construct the object. It doesn't have to do any lookups or query the DOM; it just wraps the element and returns it.
That said, it definitely doesn't hurt to use the method you're using there, and that's what I do all the time myself. It certainly helps for when you create nested closures:
$('div').click(function() {
    var $this = $(this);
    $this.find("p").each(function() {
        // here, there's no reference to the div other than by using $this
        alert(this.nodeName); // "p"
    });
});
});