Is it worth optimizing JavaScript code by replacing strings with constants?

I often work with jQuery, and sometimes my code contains a lot of repeated CSS class names, like:
$('#div1').addClass('funny-class');
...
$('#div2').addClass('funny-class');
...
$('#div1').removeClass('funny-class');
etc. I was asking myself whether it is worth cleaning up the code using pseudo-constants like:
var constants = {
    FUNNY: 'funny-class'
};
...
$('#div1').addClass(constants.FUNNY);
$('#div2').addClass(constants.FUNNY);
$('#div1').removeClass(constants.FUNNY);
First: minification. If "constants" is a local variable (hidden inside a scope), the minifier could probably replace each occurrence of "constants.FUNNY" with something minified like "a.B".
Second: speed. Is the version using "constants" faster than the original?
What do you think?

You can use Google's Closure Compiler with advanced optimizations to identify commonly repeated strings and replace them with constant variable references. But this optimization is marginal; if you want to improve your code, you'd get a bigger win by caching jQuery objects (something many programmers overlook):
var $div1 = $('#div1');
var $div2 = $('#div2');
$div1.addClass('funny-class');
$div2.addClass('funny-class');
$div1.removeClass('funny-class');

It might not make a noticeable difference in performance, but it is always good practice to use constants instead of string literals, because you can easily change the value in one place.

Yes, a minifier will probably shorten the constants.FUNNY reference, whereas it probably won't detect the reuse of 'funny-class' and assign it to a variable.
The speed difference will be so marginal you shouldn't care. On one hand you have to resolve the variable via the scope chain (very quick); on the other you have to create a string primitive (also quick).
The benefit of having your constants object is that if you decide to change the name of the class, you only have to do it in one place, not three; this is the sole benefit you should be considering... not a 0.0000000000000001-second speed difference.
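If you are curious anyway, here is a minimal sketch for measuring it yourself (the class-name string and the iteration count are arbitrary; absolute numbers vary by browser):
var constants = { FUNNY: 'funny-class' };
var el = document.createElement('div');

// assign the string literal each time
console.time('literal');
for (var i = 0; i < 1e6; i++) {
    el.className = 'funny-class';
}
console.timeEnd('literal');

// same work, but reading the string from the constants object
console.time('constant');
for (var j = 0; j < 1e6; j++) {
    el.className = constants.FUNNY;
}
console.timeEnd('constant');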

Putting values in variables gives you a certain amount of "central control" rather than a performance gain.
However, burying them deeply in an object does incur a small penalty. Keep them as near the surface as possible to avoid the overhead of repeated property lookups. (It's minimal, but still an overhead.)
//this one is so deep:
constants.foo.bar.baz.bam
//this is just 1 level deep:
constants.bam
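If you want to see the difference yourself, a quick sketch (hypothetical object shapes and an arbitrary iteration count; timings will vary by engine):
var deep = { foo: { bar: { baz: { bam: 'funny-class' } } } };
var shallow = { bam: 'funny-class' };
var x;

console.time('deep');
for (var i = 0; i < 1e6; i++) {
    x = deep.foo.bar.baz.bam; // four property lookups
}
console.timeEnd('deep');

console.time('shallow');
for (var j = 0; j < 1e6; j++) {
    x = shallow.bam; // one property lookup
}
console.timeEnd('shallow');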
Also, I'd worry more about the jQuery calls you are making:
//two calls to $() for #div1!
$('#div1').addClass(constants.FUNNY); // $() for div1
$('#div2').addClass(constants.FUNNY);
$('#div1').removeClass(constants.FUNNY); // another $() for div1

//do this instead
var div1 = $('#div1'); // reference div1 once in a local variable
div1.addClass(constants.FUNNY); // use the reference to the object
$('#div2').addClass(constants.FUNNY);
div1.removeClass(constants.FUNNY); // use the same reference to the object

Related

In JavaScript, is accessing 'window.Math' slower or faster than accessing the 'Math' object without the 'window.'? [duplicate]

I'm kind of curious about what the best practice is when referencing the 'global' namespace in JavaScript, which is merely a shortcut to the window object (or vice versa, depending on how you look at it).
I want to know if:
var answer = Math.floor(value);
is better or worse than:
var answer = window.Math.floor(value);
Is one better or worse, even slightly, for performance, resource usage, or compatibility?
Does one have a slightly higher cost? (Something like an extra pointer dereference, perhaps?)
Edit note: While I usually put readability far above performance, in this case I am ignoring the readability differences to focus solely on performance.
First of all, never compare things like these for performance reasons. Math.round is obviously easier on the eyes than window.Math.round, and you wouldn't see a noticeable performance increase by using one or the other. So don't obfuscate your code for very slight performance increases.
However, if you're just curious about which one is faster... I'm not sure how the global scope is looked up "under the hood", but I would guess that accessing window is just the same as accessing Math (window and Math live on the same level, as evidenced by window.window.window.Math.round working). Thus, accessing window.Math would be slower.
Also, the way variables are looked up, you would see a performance increase by doing var round = Math.round; and calling round(1.23), since all names are first looked up in the current local scope, then the scope above the current one, and so on, all the way up to the global scope. Every scope level adds a very slight overhead.
But again, don't do these optimizations unless you're sure they will make a noticeable difference. Readable, understandable code is important for it to work the way it should, now and in the future.
Here's a full profiling using Firebug:
<!DOCTYPE html>
<html>
<head>
    <title>Benchmark scope lookup</title>
</head>
<body>
    <script>
        function bench_window_Math_round() {
            for (var i = 0; i < 100000; i++) {
                window.Math.round(1.23);
            }
        }

        function bench_Math_round() {
            for (var i = 0; i < 100000; i++) {
                Math.round(1.23);
            }
        }

        function bench_round() {
            for (var i = 0, round = Math.round; i < 100000; i++) {
                round(1.23);
            }
        }

        console.log('Profiling will begin in 3 seconds...');

        setTimeout(function () {
            console.profile();
            for (var i = 0; i < 10; i++) {
                bench_window_Math_round();
                bench_Math_round();
                bench_round();
            }
            console.profileEnd();
        }, 3000);
    </script>
</body>
</html>
My results:
Time shows total for 100,000 * 10 calls, Avg/Min/Max show time for 100,000 calls.
                          Calls   Percent   Own Time    Time        Avg         Min         Max
bench_window_Math_round   10      86.36%    1114.73ms   1114.73ms   111.473ms   110.827ms   114.018ms
bench_Math_round          10      8.21%     106.04ms    106.04ms    10.604ms    10.252ms    13.446ms
bench_round               10      5.43%     70.08ms     70.08ms     7.008ms     6.884ms     7.092ms
As you can see, window.Math is a really bad idea. I guess accessing the global window object adds extra overhead. However, the difference between accessing the Math object from the global scope and accessing a local variable holding a reference to the Math.round function isn't very great... Keep in mind that this is 100,000 calls, and the difference is only 3.6 ms. Even with one million calls you'd only see a 36 ms difference.
Things to think about with the above profiling code:
The functions are actually looked up from another scope, which adds overhead (barely noticeable though; I tried importing the functions into the anonymous function).
The actual Math.round function adds overhead (I'm guessing about 6ms in 100,000 calls).
This is an interesting question if you want to know how the scope chain and the identifier resolution process work.
The scope chain is a list of objects that are searched when evaluating an identifier. Those objects are not accessible by code; only their properties (identifiers) can be accessed.
At first, in global code, the scope chain is created and initialised to contain only the global object.
Subsequent objects in the chain are created when you enter a function execution context; the with statement and the catch clause also introduce objects into the chain.
For example:
// global code
var var1 = 1, var2 = 2;

(function () { // one
    var var3 = 3;

    (function () { // two
        var var4 = 4;

        with ({var5: 5}) { // three
            alert(var1);
        }
    })();
})();
In the above code, the scope chain contains different objects at different levels. For example, at the deepest level, within the with statement, if you use the var1 or var2 variables, four objects in the scope chain need to be inspected to resolve that identifier: the one introduced by the with statement, the two function scopes, and finally the global object.
You also need to know that window is just a property of the global object that points to the global object itself. window is introduced by browsers, and in other environments it often isn't available.
In conclusion, when you use window, since it is just an identifier (not a reserved word or anything like that), it has to go through the whole resolution process to reach the global object, and window.Math then needs an additional step: the dot (.) property accessor.
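You can verify in a browser console that window is just a self-referencing property of the global object (a small illustration of the above, not a benchmark):
console.log(window.window === window);             // true: window points to itself
console.log(window.Math === Math);                 // true: same object, longer lookup path
console.log(window.window.window.Math.round(1.5)); // 2: the chain can be arbitrarily long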
JS performance differs widely from browser to browser.
My advice: benchmark it. Just put it in a for loop, let it run a few million times, and time it.... see what you get. Be sure to share your results!
(As you've said) Math.floor will probably just resolve to the same function as window.Math.floor (since window is the JavaScript global object) in most JavaScript implementations, such as V8.
Spidermonkey and V8 will be so heavily optimised for common usage that it shouldn't be a concern.
For readability my preference would be to use Math.floor; the difference in speed is so insignificant it's never worth worrying about. If you're doing 100,000 floors, it's probably time to move that logic out of the client.
You may want to have a nose around the V8 source; there are some interesting comments in there about shaving nanoseconds off functions, such as this one about parseInt:
// Some people use parseInt instead of Math.floor. This
// optimization makes parseInt on a Smi 12 times faster (60ns
// vs 800ns). The following optimization makes parseInt on a
// non-Smi number 9 times faster (230ns vs 2070ns). Together
// they make parseInt on a string 1.4% slower (274ns vs 270ns).
As far as I understand JavaScript's lookup logic, any identifier you refer to is ultimately searched for up to the global scope. In browser implementations, the window object is the global object. Hence, when you ask for window.Math you first have to resolve the identifier window, then fetch its properties to find Math there. If you simply ask for Math, the lookup ends at the global object directly.
So, yes- calling Math.something will be faster than window.Math.something.
Douglas Crockford talks about it in his lecture http://video.yahoo.com/watch/111593/1710507; as far as I recall, it's in the third part of the video.
If Math.round() is being called in a local/function scope, the interpreter first has to check for a local variable before reaching the global/window object. So in a local scope my guess would be that window.Math.round() would be very slightly faster. This isn't assembly, or C or C++, so I wouldn't worry about which is faster for performance reasons; but out of curiosity, sure, benchmark it.

Should I be caching jQuery selectors in the global namespace? [duplicate]

I have some trouble with my JavaScript (JS) code, since I sometimes need to access the same DOM elements more than once in the same function. Some reasoning is also provided here.
From the point of view of performance, is it better to create a jQuery object once and then cache it, or is it better to create the same jQuery object at will?
Example:
function () {
    $('selector XXX').doSomething(); // first call
    $('selector XXX').doSomething(); // second call
    ...
    $('selector XXX').doSomething(); // n-th call
}

or

function () {
    var obj = $('selector XXX');
    obj.doSomething(); // first call
    obj.doSomething(); // second call
    ...
    obj.doSomething(); // n-th call
}
I suppose the answer probably depends on the value of "n", so assume that n is a "small" number (e.g. 3), then a medium number (e.g. 10), and finally a large one (e.g. 30, as if the object were used for comparisons in a for loop).
Thanks in advance.
It is always better to cache the element if n is greater than 1: either cache it or chain the operations together (you can do $('#something').something().somethingelse(); for most jQuery operations, since they usually return the wrapped set itself). As an aside, it has become a bit of a standard to name cache variables beginning with a dollar sign ($), so that later in the code it is evident you are operating on a jQuery set. So you will see a lot of people write var $content = $('#content'); and then use $content.find('...'); later on.
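For example (a sketch using a hypothetical #content id and .item class; assumes jQuery is loaded):
// cache once, with the $ prefix marking it as a jQuery set
var $content = $('#content');

// chain operations instead of re-selecting
$content.addClass('active').find('.item').hide();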
The second is superior. Most importantly, it is cleaner: in the future, if you want to change your selector, you only need to change it in one place instead of N places.
Secondly, it should perform better, although a user would only notice the difference with a particularly heavy DOM or if you were invoking that function a lot.
If you look at this question from a different perspective, the correct answer is obvious.
In the first case, you're duplicating the selection logic in every place it appears. If you change the name of the element, you have to change each occurrence. This should be reason enough not to do it. Now you have two options: either you cache the element's selector or the element itself. Using the element as an object makes more sense than using the name.
Performance-wise, I think the effect is negligible. You can probably find test results for this particular use case: caching jQuery objects vs. always re-selecting them. Performance might become an issue if you have a large DOM and do a lot of lookups, but you need to see for yourself whether that's the case.
If you want to see exactly how much memory your objects take up, you can use the Chrome Heap Profiler and check there. I don't know whether similar tools are available for other browsers, and implementations will probably vary wildly in performance, especially in IE's case, but it may satisfy your curiosity.
IMO, you should use the second variant, storing the result of the selection in a variable, not so much to improve performance as to have as little duplicated logic as possible.
As for caching $(this), I agree with Nick Craver's answer. As he said there, you should also use chaining where possible; it cleans up your code and solves your problem.
You should take a look at
http://www.artzstudio.com/2009/04/jquery-performance-rules/
or
http://addyosmani.com/jqprovenperformance/
I almost always prefer to cache the jQuery object, but the benefit varies greatly depending on exactly what you are using as your selector. If you are using ids the benefit is far smaller than with other types of selectors. Also, not all selectors are created equal, so try to keep that in mind when you write your selectors.
For example:
$('table tr td') is a very poor selector. Try to use a context or .find() instead and it will make a BIG difference.
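For example (a sketch assuming a table with a hypothetical id of #myTable):
// poor: forces a scan over the whole document
$('table tr td').addClass('highlight');

// better: narrow the search with .find()
$('#myTable').find('td').addClass('highlight');

// equivalent: pass the context as the second argument
$('td', '#myTable').addClass('highlight');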
One thing I like to do is place timers in my code to see just how efficient it is.
var timer = new Date();
// code here
console.log('time to complete: ' + (new Date() - timer));
Most operations on cached objects complete in less than 2 milliseconds, whereas brand-new selectors take quite a bit longer, because you first have to find the element and then perform the operation.
In JavaScript, functions are generally short-lived—especially when hosted by a browser. However, a function’s scope might outlive the function. This happens, for example, when you create a closure. If you want to prevent a jQuery object from being referenced for a long time, you can assign null to any variables that reference it when you are done with that variable or use indirection to create your closures. For example:
var createHandler = function (someClosedOverValue) {
    return function () {
        doSomethingWith(someClosedOverValue);
    };
};

var blah = function () {
    var myObject = jQuery('blah');
    // We want to enable the closure to access 'red' but not keep
    // myObject alive, so use a special createHandler for it:
    var myClosureWithoutAccessToMyObject = createHandler('red');
    doSomethingElseWith(myObject, myClosureWithoutAccessToMyObject);
    // After this function returns, and assuming doSomethingElseWith() does
    // not itself create additional references to myObject, myObject
    // will no longer have any references and will be eligible for garbage
    // collection.
};
Because jQuery(selector) might end up having to run expensive algorithms or even walk the DOM tree a bit for complex expressions that can’t be handled by the browser directly, it is better to cache the returned object. Also, as others have mentioned, for code clarity, it is better to cache the returned object to avoid typing the selector multiple times. I.e., DRY code is often easier to maintain than WET code.
However, each jQuery object has some amount of overhead, so storing large arrays of jQuery objects in global variables is probably wasteful, unless you actually need to operate on large numbers of these objects and still treat them as distinct. In such a situation, you might save memory by caching arrays of the DOM elements directly and using the jQuery(DOMElement) constructor, which should be essentially free, when iterating over them.
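A sketch of that idea (a hypothetical .row class; assumes jQuery is loaded):
// store plain DOM elements rather than jQuery objects
var rows = $('.row').get(); // .get() returns an array of DOM elements

for (var i = 0; i < rows.length; i++) {
    // wrapping an existing DOM element is cheap compared to running a selector
    jQuery(rows[i]).toggleClass('odd', i % 2 === 0);
}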
Though, as people say, you can only know the best approach for your particular case by benchmarking different approaches. It is hard to predict reality even when theory seems sound ;-).

How many globals does it make sense to pass to the IIFE wrapper?

To what extent does it make sense to pass many global values to an IIFE?
The common practice, as far as I can see, is to pass just three (window, document, and undefined).
But... would it make sense to pass more if they are used more than 10 times in the code, purely for the sake of minification?
In my case I found the global variable Math 14 times in the code. It would make sense to pass it to an IIFE in order to save 42 bytes. That is not a lot in this case, but if we add up the savings across different global variables, then it would always make sense to pass as many global variables as possible, right? (Symbol, Object, Error, Date, JSON...)
(function ($, window, document, Math, undefined) {
    $.fn.mydemo = function () {
    };
}(jQuery, window, document, Math));
Then, why isn't this a common approach?
Update:
To explain the 42 bytes of reduction:
Math = 4 characters
1 character = 1 byte
14 occurrences of Math = 56 bytes
Math gets replaced by a single character after minification,
since the function can be defined as function($, w, d, m, u),
so the 14 occurrences of the shortened name (m) = 14 bytes
56 - 14 = 42 bytes of reduction
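To make the arithmetic concrete, here is roughly what a minifier could do with the Math parameter (a hypothetical before/after; real minifier output differs in details):
// before: Math appears at 4 bytes per use
(function (window, document, Math) {
    var n = Math.floor(Math.random() * Math.PI);
    // ...eleven more uses of Math...
}(window, document, Math));

// after: the parameter is renamed to one character, so each use costs 1 byte
(function(w,d,m){var n=m.floor(m.random()*m.PI);}(window,document,Math));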
First of all, those values are not IIFEs.
And this is not about "saving characters" by having shorter variable names inside the function (at least not mainly), but rather about variable lookup and the "cost" associated with it.
If you were to use, for example, document inside your function without passing it in, then a variable named document would first be searched for in the scope of the function, and only when that fails would the search continue in the enclosing scope, and so on.
That is the reason for passing such objects as parameters into the function – so that a direct reference to them within the function scope exists, and they do not have to be looked up in higher outside scopes.
Sometimes you might even see this used in a form like this:
(function (document) {
    // do something with document, such as:
    document.foo();
    document.bar = "baz";
})(document);
– in that form, it should be even clearer that this is not about saving characters in variable names. The object is still referred to as document inside the function (which makes it clear what it is supposed to represent: the global document object), and the only effect achieved is the shorter lookup described above.
There are a number of cases where it makes sense to pass variables to an IIFE.
Aliasing
Passing a variable to an IIFE allows you to rename the variable within the function. This is commonly seen when using jQuery, particularly when noConflict is used:
(function ($) {
    // in here $ will be the same as jQuery
}(jQuery));
Aliasing also helps minifiers minify code. When you see something like:
(function (document, slice, Math) {
    ...
}(document, Array.prototype.slice, Math));
The minifier can rename the parameters to whatever it wants and save you bytes. For large scripts that use these properties a lot, the savings can be significant when it gets turned into:
(function(a,b,c){...}(document,Array.prototype.slice,Math));
Portability
This is more of an edge case than a general rule, but it's common to see a global IIFE in the form of:
(function (global /* or window */) {
    ...
}(this));
This allows for portability between node.js and the browser so that the global variable has the same name in both environments.
Character Savings
While I already mentioned that minifiers can reduce the character count by changing the names of aliases, you may want to do this manually if you're participating in a code golf challenge.
Reference Safety
If you're authoring a script that must work in whatever environment it's dumped into (think Google Analytics), you'll want to be sure that the global methods you're calling are what you expect. Storing a reference to those functions by passing them as parameters is one way to protect the references against being overridden by a malicious or careless programmer.
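A sketch of that pattern (safeLog is a hypothetical name; this assumes the script runs before anything tampers with the globals):
(function (JSON, setTimeout) {
    // even if another script later overwrites window.JSON or window.setTimeout,
    // these parameters still refer to the original functions
    window.safeLog = function (data) {
        setTimeout(function () {
            console.log(JSON.stringify(data));
        }, 0);
    };
}(JSON, setTimeout));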
To answer the question in your title:
How many globals does it make sense to pass to the IIFE wrapper?
As many as you need and no more. If you need to alias one or two variables, pass one or two references. If you need to be sure that the global functions aren't being changed, you may end up with 100 parameters. There's no hard-and-fast rule on this.
would it make sense to pass more if they are used more than 10 times in the code, purely for the sake of minification?
If you care that much about minification, sure, why not?
The common practice, as far as I can see, is to pass just three (window, document, and undefined)
Yes, although you just as often see document not being passed, or jQuery being passed (aliased as $). And of course it's not only about minification but also about performance, and only window and document really matter in that regard.
it would always make sense to pass as many global variables as possible, right?
Well, except that you don't actually use them in your code. Symbol, Object, Error, Date, JSON, Math and the others are not needed that often in most code. And developers don't like to redo the byte counts you are suggesting every time they change a bit of code, so this IIFE boilerplate just stays as it is (and IMHO there's a lot of cargo cult to it).
You would let your minifier do this automatically if you really cared.
