JavaScript optimization: Caching math functions globally

I'm currently doing some "extreme" optimization on a JavaScript game engine I'm writing, and I've noticed that I use math functions a lot. At the moment I only cache them locally in each function that uses them, so I was going to cache them at the global level on the window object using the code below.
var aMathFunctions = Object.getOwnPropertyNames(Math);
for (var i in aMathFunctions)
{
    window[aMathFunctions[i]] = Math[aMathFunctions[i]];
}
Are there any major problems or side effects with this? Will I be overwriting existing functions in window, and will I be increasing my memory footprint dramatically? Or what else may go wrong?
EDIT: Below is an excerpt from reading I have done about JavaScript optimization that has led me to try this.
Property Depth
Nesting objects in order to use dot notation is a great way to namespace and organize your code. Unfortunately, when it comes to performance, this can be a bit of a problem. Every time a value is accessed in this sort of scenario, the interpreter has to traverse the objects you've nested in order to get to that value. The deeper the value, the more traversal, the longer the wait. So even though namespacing is a great organizational tool, keeping things as shallow as possible is your best bet at faster performance. The latest incarnation of the YUI Library evolved to eliminate a whole layer of nesting from its namespacing. So, for example, YAHOO.util.Anim is now Y.Anim.
Reference: http://www.phpied.com/extreme-javascript-optimization/

Edit: Should not matter anymore in Chrome due to this revision; perhaps caching is now even faster.
Don't do it, it's much slower when using global functions.
http://jsperf.com/math-vs-global
On Chrome:
sqrt(2); - 12,453,198 ops/second
Math.sqrt(2); - 542,475,219 ops/second
As for memory usage, on the other hand, globalizing wouldn't be bad at all: you just create another reference; the function itself is not copied.
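To make the memory point concrete, here is a minimal sketch (illustrative names, not the asker's engine code): globalizing only adds another reference to the same function object, and the usual faster alternative is a local alias inside the hot function.
// Globalizing copies a reference, not the function itself:
window.floor = Math.floor;
console.log(window.floor === Math.floor); // true - same function object

// The commonly faster pattern is a local alias inside the hot function:
function sumFloors(values) {
    var floor = Math.floor; // one property lookup, then plain local-variable access
    var total = 0;
    for (var i = 0; i < values.length; i++) {
        total += floor(values[i]);
    }
    return total;
}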

I am actually amazed that it is faster for me on Mac OS X with Firefox 5; we're talking a 5-8 ms difference over 50,000 iterations.
console.time("a");
for (var i=0;i<50000;i++) {
var x = Math.floor(12.56789);
}
console.timeEnd("a");
var floor = Math.floor;
console.time("b");
for (var i=0;i<50000;i++) {
var y = floor(12.56789);
}
console.timeEnd("b");
The only real bonus I see is if it reduces the footprint of the code. I have not tested any other browsers, so it may be a boost in one and slower in others.
Would it cause any problems? I don't see why it would unless you have things in global scope with those names. :)
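If you do go ahead with it, here is a hedged sketch of a slightly safer version; it assumes you want to skip names that already exist on window rather than overwrite them.
// Sketch: guard against clobbering existing globals before copying Math members.
var mathNames = Object.getOwnPropertyNames(Math);
for (var i = 0; i < mathNames.length; i++) {
    var key = mathNames[i];
    if (key in window) {
        // e.g. another script may already have defined `min`, `max`, `random`...
        console.warn('Skipping Math.' + key + ': window.' + key + ' already exists');
        continue;
    }
    window[key] = Math[key];
}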

Related

Adding methods to a Prototype vs existing Prototype Javascript [duplicate]

In JavaScript, we have two ways of making a "class" and giving it public functions.
Method 1:
function MyClass() {
    var privateInstanceVariable = 'foo';
    this.myFunc = function() { alert(privateInstanceVariable); }
}
Method 2:
function MyClass() { }

MyClass.prototype.myFunc = function() {
    alert("I can't use private instance variables. :(");
}
I've read numerous times people saying that using Method 2 is more efficient as all instances share the same copy of the function rather than each getting their own. Defining functions via the prototype has a huge disadvantage though - it makes it impossible to have private instance variables.
Even though, in theory, using Method 1 gives each instance of an object its own copy of the function (and thus uses far more memory, not to mention the time required for allocations) - is that what actually happens in practice? It seems like an optimization web browsers could easily make would be to recognize this extremely common pattern and have all instances of the object reference the same copy of functions defined via these "constructor functions". Then it could give an instance its own copy of the function only if it is explicitly changed later on.
Any insight - or, even better, real world experience - about performance differences between the two, would be extremely helpful.
See http://jsperf.com/prototype-vs-this
Declaring your methods via the prototype is faster, but whether or not this is relevant is debatable.
If you have a performance bottleneck in your app it is unlikely to be this, unless you happen to be instantiating 10000+ objects on every step of some arbitrary animation, for example.
If performance is a serious concern, and you'd like to micro-optimise, then I would suggest declaring via prototype. Otherwise, just use the pattern that makes most sense to you.
I'll add that, in JavaScript, there is a convention of prefixing properties that are intended to be seen as private with an underscore (e.g. _process()). Most developers will understand and avoid these properties, unless they're willing to forgo the social contract, but in that case you might as well not cater to them. What I mean to say is that: you probably don't really need true private variables...
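As a small illustrative sketch of that convention (hypothetical Account type), prototype methods can still use the underscore-prefixed property freely; it is only "private" by agreement.
// `_balance` is public but flagged as internal by the underscore convention.
function Account(initial) {
    this._balance = initial;      // "private by convention"
}
Account.prototype.deposit = function (amount) {
    this._balance += amount;      // prototype methods can reach it freely
};
Account.prototype.getBalance = function () {
    return this._balance;
};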
In the new version of Chrome, this.method is about 20% faster than prototype.method, but creating a new object is still slower.
If you can reuse the object instead of always creating a new one, this can be 50% - 90% faster than creating new objects. Plus the benefit of no garbage collection, which is huge:
http://jsperf.com/prototype-vs-this/59
It only makes a difference when you're creating lots of instances. Otherwise, the performance of calling the member function is exactly the same in both cases.
I've created a test case on jsperf to demonstrate this:
http://jsperf.com/prototype-vs-this/10
You might not have considered this, but putting the method directly on the object is actually better in one way:
Method invocations are very slightly faster (jsperf) since the prototype chain does not have to be consulted to resolve the method.
However, the speed difference is almost negligible. On top of that, putting a method on a prototype is better in two more impactful ways:
Faster to create instances (jsperf)
Uses less memory
Like James said, this difference can be important if you are instantiating thousands of instances of a class.
That said, I can certainly imagine a JavaScript engine that recognizes that the function you are attaching to each object does not change across instances and thus only keeps one copy of the function in memory, with all instance methods pointing to the shared function. In fact, it seems that Firefox is doing some special optimization like this but Chrome is not.
ASIDE:
You are right that it is impossible to access private instance variables from inside methods on prototypes. So I guess the question you must ask yourself is: do you value being able to make instance variables truly private over utilizing inheritance and prototyping? I personally think that making variables truly private is not that important and would just use the underscore prefix (e.g., "this._myVar") to signify that although the variable is public, it should be considered private. That said, in ES6, there is apparently a way to have the best of both worlds!
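One commonly cited ES6 technique (a sketch of one possible approach, not necessarily the one the answer had in mind) is to keep per-instance state in a WeakMap keyed by the instance, so prototype methods can reach it without exposing a property.
// Per-instance state lives in a WeakMap outside the object itself.
var privates = new WeakMap();

function Counter() {
    privates.set(this, { count: 0 });
}
Counter.prototype.increment = function () {
    privates.get(this).count += 1;
};
Counter.prototype.value = function () {
    return privates.get(this).count;
};

var c = new Counter();
c.increment();
console.log(c.value()); // 1
console.log(c.count);   // undefined - not an own property of the instance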
You may use this approach and it will allow you to use prototype and access instance variables.
var Person = (function () {
    function Person(age, name) {
        this.age = age;
        this.name = name;
    }

    Person.prototype.showDetails = function () {
        alert('Age: ' + this.age + ' Name: ' + this.name);
    };

    return Person; // This is not referencing `var Person` but the inner Person function
}()); // See Note1 below
Note1:
The parentheses call the function (a self-invoking function) and assign the result to the var Person.
Usage
var p1 = new Person(40, 'George');
var p2 = new Person(55, 'Jerry');
p1.showDetails();
p2.showDetails();
In short, use method 2 for creating properties/methods that all instances will share. Those will be "global" and any change to them will be reflected across all instances. Use method 1 for creating instance-specific properties/methods.
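A tiny sketch of that distinction, with hypothetical names:
// method 2 style: shared via the prototype; method 1 style: per-instance in the constructor.
function Widget(label) {
    this.label = label;              // per-instance property
}
Widget.prototype.theme = 'light';    // shared by all instances

var a = new Widget('a'), b = new Widget('b');
Widget.prototype.theme = 'dark';     // the change is visible through every instance
console.log(a.theme, b.theme);       // "dark" "dark"
console.log(a.label, b.label);       // "a" "b" - independent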
I wish I had a better reference but for now take a look at this. You can see how I used both methods in the same project for different purposes.
Hope this helps. :)
This answer should be considered an expansion of the rest of the answers filling in missing points. Both personal experience and benchmarks are incorporated.
As far as my experience goes, I religiously use constructors to literally construct my objects, whether methods are private or not. The main reason is that when I started, that was the easiest immediate approach for me, so it's not a special preference. It might have been as simple as liking visible encapsulation, whereas prototypes feel a bit disembodied. My private methods are assigned as variables in the scope as well. Although this is my habit and keeps things nicely self-contained, it's not always the best habit and I do sometimes hit walls. Apart from wacky scenarios with highly dynamic self-assembly according to configuration objects and code layout, it tends to be the weaker approach in my opinion, particularly if performance is a concern. Knowing that the internals are private is useful, but you can achieve that via other means with the right discipline. Unless performance is a serious consideration, use whatever otherwise works best for the task at hand.
Using prototype inheritance and a convention to mark items as private does make debugging easier, as you can then traverse the object graph easily from the console or debugger. On the other hand, such a convention makes obfuscation somewhat harder and makes it easier for others to bolt their own scripts onto your site. This is one of the reasons the private-scope approach gained popularity. It's not true security, but it adds resistance. Unfortunately, a lot of people still think it's genuinely a way to program secure JavaScript. Since debuggers have gotten really good, code obfuscation takes its place. If you're looking for security flaws where too much is on the client, it's a design pattern you might want to look out for.
A convention allows you to have protected properties with little fuss. That can be a blessing and a curse. It does ease some inheritance issues as it is less restrictive. You still do have the risk of collision or increased cognitive load in considering where else a property might be accessed. Self assembling objects let you do some strange things where you can get around a number of inheritance problems but they can be unconventional. My modules tend to have a rich inner structure where things don't get pulled out until the functionality is needed elsewhere (shared) or exposed unless needed externally. The constructor pattern tends to lead to creating self contained sophisticated modules more so than simply piecemeal objects. If you want that then it's fine. Otherwise if you want a more traditional OOP structure and layout then I would probably suggest regulating access by convention. In my usage scenarios complex OOP isn't often justified and modules do the trick.
All of the tests here are minimal. In real-world usage it is likely that modules will be more complex, making the hit a lot greater than the tests here will indicate. It's quite common to have a private variable with multiple methods working on it, and each of those methods will add more overhead on initialisation that you won't get with prototype inheritance. In most cases it doesn't matter, because only a few instances of such objects float around, although cumulatively it might add up.
There is an assumption that prototype methods are slower to call because of the prototype lookup. It's not an unfair assumption; I made the same one myself until I tested it. In reality it's complex, and some tests suggest that aspect is trivial. Between prototype.m = f, this.m = f and this.m = function..., the latter performs significantly better than the first two, which perform around the same. If the prototype lookup alone were a significant issue, then the last two would instead outperform the first significantly. Instead something else strange is going on, at least where Canary is concerned. It's possible functions are optimised according to what they are members of. A multitude of performance considerations come into play. You also have differences for parameter access and variable access.
Memory Capacity. It's not well discussed here. An assumption you can make up front that's likely to be true is that prototype inheritance will usually be far more memory efficient, and according to my tests it is in general. When you build up your object in your constructor, you can assume that each object will probably have its own instance of each function rather than a shared one, a larger property map for its own personal properties, and likely some overhead to keep the constructor scope open as well. Functions that operate on the private scope are extremely and disproportionately demanding of memory. I find that in a lot of scenarios the proportionate difference in memory will be much more significant than the proportionate difference in CPU cycles.
Memory Graph. You can also jam up the engine, making GC more expensive. Profilers do tend to show time spent in GC these days. It's not only a problem when it comes to allocating and freeing more. You'll also create a larger object graph to traverse and things like that, so the GC consumes more cycles. If you create a million objects and then hardly touch them, depending on the engine it might turn out to have more of an ambient performance impact than you expected. I have proven that this does at least make the GC run for longer when objects are disposed of. That is, there tends to be a correlation between memory used and the time it takes to GC. However, there are cases where the time is the same regardless of the memory. This indicates that the graph makeup (layers of indirection, item count, etc.) has more impact. That's not something that is always easy to predict.
Not many people use chained prototypes extensively, myself included, I have to admit. Prototype chains can be expensive in theory. Someone will, but I've not measured the cost. If you instead build your objects entirely in the constructor, and then have a chain of inheritance as each constructor calls a parent constructor upon itself, in theory method access should be much faster. On the other hand, you can accomplish the equivalent if it matters (such as flattening the prototypes down the ancestor chain), and you don't mind breaking things like hasOwnProperty, and perhaps instanceof, if you really need it. In either case things start to get complex once you go down this road when it comes to performance hacks. You'll probably end up doing things you shouldn't be doing.
Many people don't directly use either approach you've presented. Instead they make their own things using anonymous objects, allowing method sharing any which way (mixins, for example). There are a number of frameworks as well that implement their own strategies for organising modules and objects. These are heavily convention-based custom approaches. For most people, and for you, your first challenge should be organisation rather than performance. This is often complicated in that JavaScript gives many ways of achieving things, versus languages or platforms with more explicit OOP/namespace/module support. When it comes to performance, I would say instead to avoid major pitfalls first and foremost.
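As a hedged sketch of the mixin idea mentioned above (hypothetical names), methods can be shared by copying them onto a prototype rather than defining them through a single constructor/prototype pair:
// A plain object of methods, mixed into a constructor's prototype.
var eventMixin = {
    on: function (name, fn) {
        (this._handlers = this._handlers || {})[name] = fn;
    },
    fire: function (name, data) {
        if (this._handlers && this._handlers[name]) this._handlers[name](data);
    }
};

function Player(name) { this.name = name; }
Object.assign(Player.prototype, eventMixin); // share the methods without inheritance

var p = new Player('p1');
p.on('score', function (points) { console.log(points); });
p.fire('score', 10); // 10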
There's a new Symbol type that's supposed to work for private variables and methods. There are a number of ways to use this and it raises a host of questions related to performance and access. In my tests the performance of Symbols wasn't great compared to everything else but I never tested them thoroughly.
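For reference, a minimal Symbol sketch (illustrative only): the symbol-keyed property does not show up in normal enumeration, but this is obscurity rather than true privacy, since Object.getOwnPropertySymbols can still reveal it.
// A Symbol used as a quasi-private property key.
var _count = Symbol('count');

function Clicker() {
    this[_count] = 0;
}
Clicker.prototype.click = function () {
    this[_count] += 1;
    return this[_count];
};

var btn = new Clicker();
btn.click();
console.log(Object.keys(btn));                  // []
console.log(Object.getOwnPropertySymbols(btn)); // [Symbol(count)]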
Disclaimers:
There are lots of discussions about performance and there isn't always a permanently correct answer for this as usage scenarios and engines change. Always profile but also always measure in more than one way as profiles aren't always accurate or reliable. Avoid significant effort into optimisation unless there's definitely a demonstrable problem.
It's probably better instead to include performance checks for sensitive areas in automated testing and to run when browsers update.
Remember that sometimes battery life matters as well as perceptible performance. The slowest solution might turn out faster after running an optimising compiler on it (i.e., a compiler might have a better idea of when restricted-scope variables are accessed than of properties marked as private by convention). Consider backends such as node.js. These can require better latency and throughput than you would often find in the browser. Most people won't need to worry about these things with something like validation for a registration form, but the number of diverse scenarios where such things might matter is growing.
You have to be careful with memory allocation tracking tools to make sure the result is persisted. In some cases where I didn't return and persist the data, it was optimised out entirely, or the sample rate was not sufficient between instantiated and unreferenced, leaving me scratching my head as to how an array initialised and filled to a million elements registered as 3.4 KiB in the allocation profile.
In the real world, in most cases the only way to really optimise an application is to write it in the first place so you can measure it. There are dozens to hundreds of factors that can come into play, if not thousands, in any given scenario. Engines also do things that can lead to asymmetric or non-linear performance characteristics. If you define functions in a constructor, they might be arrow functions or traditional ones; each behaves differently in certain situations, and I have no idea about the other function types. Classes also don't behave the same in terms of performance as prototyped constructors that should be equivalent. You need to be really careful with benchmarks as well. Prototyped classes can have deferred initialisation in various ways, especially if you prototyped your properties as well (advice: don't). This means that you can understate initialisation cost and overstate access/property-mutation cost. I have also seen indications of progressive optimisation. In these cases I filled a large array with instances of identical objects, and as the number of instances increased, the objects appeared to be incrementally optimised for memory, up to a point where the remainder was the same. It is also possible that those optimisations can impact CPU performance significantly. These things depend heavily not merely on the code you write but on what happens at runtime, such as the number of objects, the variance between objects, etc.

Javascript local variables - is it worth using them?

It gets said a lot that local variables are faster than globals in JavaScript. E.g.:
function example()
{
    // Local variable
    var a = 9;
}
For instance, I've been thinking of aliasing the global Math object to a local Mathl variable or alternatively aliasing specific (much used) functions to a local variable/function like Mathround() instead of using Math.round().
Now the things that I'm thinking of doing this with (e.g. Math.round()) can be used plenty of times per animation frame (50-ish), and there could be 60 frames per second. So quite a few lookups would be avoided if I do this, and that's just one example of something I could do it with. There'd be lots of similar variables I could alias.
So my question is - is this really worth it? Is the difference tangible when there's so many lookups being avoided?
If you don't know whether it's worth it, then it's probably not. In other words, you should address performance issues when they happen by identifying, measuring, and testing situation-specific alternatives.
It's hard enough to write clear, readable, maintainable code when you set out to write something humans can understand. It's a lot harder if you set out trying to write something that computers can execute faster (before you even know what faster means in terms of your application). Focus on clarity and address performance problems as they arise.
As for your specific example. If you're dying to know the answer, then test it.
Lookups will be faster if you scope parent-scope variables locally. How much faster? Is the difference noticeable? Only you can tell by measuring the performance of your code.
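If you do decide to measure it, a sketch of the aliasing being discussed might look like this (names are illustrative, not from the asker's engine):
// Hoist the Math.round lookup out of the per-frame loop.
function renderFrame(particles) {
    var round = Math.round; // one lookup per frame instead of one per particle
    for (var i = 0; i < particles.length; i++) {
        particles[i].sx = round(particles[i].x);
        particles[i].sy = round(particles[i].y);
    }
}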
It isn't that rare to see document being aliased to a local variable.
Have a look at this part of the ExtJS library for instance.
(function() {
    var DOC = document,
    // ... (excerpt)
However, as already stated, beware of blindly aliasing object member functions, because you might run into `this`-value issues.
E.g.
var log = console.log;
log('test'); //TypeError: Illegal invocation
//you could use bind however
log = console.log.bind(console);
Say you have this code:
for (...) {
    someFunc();
    var y = Math.cos(x);
    // ...
}
To execute Math.cos(x), the JS VM must a) compute the location of the global Math object and b) get its cos property. That's just in case something inside that someFunc() does crazy things like this, for example:
Math = {};
In general, in JS, access to a local variable is similar (if not identical) to accessing an array element by a known index. Access to global objects is pretty much always a lookup by key in a map.
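A sketch of the corresponding fix for the snippet above: resolve the lookup once, outside the loop, so the loop body only touches local variables (the iteration count is illustrative).
var cos = Math.cos; // one global lookup + one property lookup, done once
var total = 0;
for (var x = 0; x < 50000; x++) {
    total += cos(x); // plain local-variable access inside the hot loop
}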

Javascript single use variable declaration

This may be an obvious question, but I was wondering if there is an efficiency difference between declaring something in a one-time-use variable or simply executing the code once (rather than storing it and then using it). For example:
var rowId = 3;
updateStuff(rowId);
VS
updateStuff(3);
I think that worrying about performance in this instance is largely irrelevant. These sorts of micro-optimizations don't really buy you all that much. In my opinion, the readability of the code here is more important, since it's better to identify the 3 as rowId than leaving it as a magic number.
If you are really concerned about optimization, minimizing your code with something like Google Closure would help; it has many tools that will help you make your code more efficient.
If you were using Google Closure to compile, and if you annotated the var as a constant (as it appears in your example), I suspect the compiler would remove the constant and replace it with the literal.
My advice is to pick the readable option, and if you really need the micro-optimizations and reduction in source size, use the Google Closure Compiler or a minifier. A tool is more likely to accumulate enough micro-optimizations to make something noticeably faster, although I'd guess most of the saved milliseconds would come from reduced network time from minification.
Also, var declarations do take time, and if you add that var to the global namespace, there is one more thing that will be searched when resolving names.
With a modern browser that compiles your code, the generated code will be exactly the same unless the variable is referenced somewhere else also.
Even with an old browser if there is a difference it won't be something to worry about.
EDIT:
delete does not apply to non-objects, therefore my initial response was wrong. Furthermore, since the variable rowId is declared with var, it will be cleaned up by garbage collection. Had it, on the other hand, been defined without var, it would live for the duration of the page/application.
source: http://lostechies.com/derickbailey/2012/03/19/backbone-js-and-javascript-garbage-collection/
The variable rowId will be kept in memory and will not be released by the garbage collector since it is referenced. Unless you release it afterwards, as below, it will be there until the end of the program's life. Also remember that it takes time to create the variable (minimal, but you asked).
var rowId = 3;
updateStuff(rowId);
delete rowId;
With efficiency in mind, then yes there is a difference. Your second example is the quickest and doesn't spend any extra resources.
Note: some languages do optimize code like this and simply remove the variable, but I strongly doubt that JavaScript does so.

Do modern JavaScript JITers need array-length caching in loops?

I find the practice of caching an array's length property inside a for loop quite distasteful. As in,
for (var i = 0, l = myArray.length; i < l; ++i) {
    // ...
}
In my eyes at least, this hurts readability a lot compared with the straightforward
for (var i = 0; i < myArray.length; ++i) {
    // ...
}
(not to mention that it leaks another variable into the surrounding function due to the nature of lexical scope and hoisting.)
I'd like to be able to tell anyone who does this "don't bother; modern JS JITers optimize that trick away." Obviously it's not a trivial optimization, since you could e.g. modify the array while it is being iterated over, but I would think given all the crazy stuff I've heard about JITers and their runtime analysis tricks, they'd have gotten to this by now.
Anyone have evidence one way or another?
And yes, I too wish it would suffice to say "that's a micro-optimization; don't do that until you profile." But not everyone listens to that kind of reason, especially when it becomes a habit to cache the length and they just end up doing so automatically, almost as a style choice.
It depends on a few things:
Whether you've proven your code is spending significant time looping
Whether the slowest browser you're fully supporting benefits from array length caching
Whether you or the people who work on your code find the array length caching hard to read
It seems from the benchmarks I've seen (for example, here and here) that performance in IE < 9 (which will generally be the slowest browsers you have to deal with) benefits from caching the array length, so it may be worth doing. For what it's worth, I have a long-standing habit of caching the array length and as a result find it easy to read. There are also other loop optimizations that can have an effect, such as counting down rather than up.
Here's a relevant discussion about this from the JSMentors mailing list: http://groups.google.com/group/jsmentors/browse_thread/thread/526c1ddeccfe90f0
My tests show that all major newer browsers cache the length property of arrays. You don't need to cache it yourself unless you're concerned about IE6 or 7, I don't remember exactly. However, I have been using another style of iteration since those days since it gives me another benefit which I'll describe in the following example:
var arr = ["Hello", "there", "sup"];
for (var i=0, str; str = arr[i]; i++) {
// I already have the item being iterated in the loop as 'str'
alert(str);
}
You must realize that this iteration style stops early if the array is allowed to contain 'falsy' values, so it cannot be used in that case.
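A quick illustration of that caveat:
// The empty string is falsy, so the loop condition fails and iteration stops.
var words = ["Hello", "", "sup"];
for (var i = 0, str; str = words[i]; i++) {
    console.log(str); // logs "Hello" only; the loop stops at ""
}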
First of all, how is this harder to do or less legible?
var i = someArray.length;
while (i--) {
    // doStuff to someArray[i]
}
This is not some weird cryptic micro-optimization. It's just a basic work avoidance principle. Not using the '.' or '[]' operators more than necessary should be as obvious as not recalculating pi more than once (assuming you didn't know we already have that in the Math object).
[rantish elements yoinked]
If someArray is entirely internal to a function it's fair game for JIT optimization of its length property which is really like a getter that actually counts up the elements of the array every time you access it. A JIT could see that it was entirely locally scoped and skip the actual counting behavior.
But this involves a fair amount of complexity. Every time you do anything that mutates that Array you have to treat length like a static property and tell your array altering methods (the native code side of them I mean) to set the property manually whereas normally length just counts the items up every time it's referenced. That means every time a new array-altering method is added you have to update the JIT to branch behavior for length references of a locally scoped array.
I could see Chrome doing this eventually, but I don't think it does yet, based on some really informal tests. I'm not sure IE will ever have this level of performance fine-tuning as a priority. As for the other browsers, you could make a strong argument that the maintenance issue of having to branch behavior for every new array method is more trouble than it's worth. At the very least, it would not get top priority.
Ultimately, accessing the length property every loop cycle isn't going to cost you a ton even in the old browsers for a typical JS loop. But I would advise getting in the habit of caching any property lookup being done more than once because with getter properties you can never be sure how much work is being done, which browsers optimize in what ways or what kind of performance costs you could hit down the road when somebody decides to move someArray outside of the function which could lead to the call object checking in a dozen places before finding what it's looking for every time you do that property access.
Caching property lookups and method returns is easy, cleans your code up, and ultimately makes it more flexible and performance-robust in the face of modification. Even if one or two JITs did make it unnecessary in circumstances involving a number of 'ifs', you couldn't be certain they always would or that your code would continue to make it possible to do so.
So yes, apologies for the anti-let-the-compiler-handle-it rant, but I don't see why you would ever want not to cache your properties. It's easy. It's clean. It guarantees better performance regardless of the browser, or of the object whose property is being examined moving to an outer scope.
But it really does piss me off that Word docs load as slowly now as they did back in 1995, and that people continue to write horrendously slow-performing Java websites even though Java's VM supposedly beats all non-compiled contenders for performance. I think this notion that you can let the compiler sort out the performance details, and that "modern computers are SO fast", has a lot to do with that. We should always be mindful of work avoidance when the work is easy to avoid and doesn't threaten legibility/maintainability, IMO. Doing it differently has never helped me (or, I suspect, anybody) write code faster in the long term.

in JavaScript, when finished with an object created via new ActiveXObject, do I need to set it to null?

In a Javascript program that runs within WSH and creates objects, let's say Scripting.FileSystemObject or any arbitrary COM object, do I need to set the variable to null when I'm finished with it? Eg, am I recommended to do this:
var fso = new ActiveXObject("Scripting.FileSystemObject");
var fileStream = fso.openTextFile(filename);
fso = null; // recommended? necessary?
// ... use fileStream here ...
fileStream.Close();
fileStream = null; // recommended? necessary?
Is the effect different than just letting the vars go out of scope?
Assigning null to an object variable will decrement the reference counter so that the memory management system can discard the resource - as soon as it feels like it. The reference counter will be decremented automagically when the variable goes out of scope. So doing it manually is a waste of time in almost all cases.
In theory a function using a big object A in its first and another big object B in its second part could be more memory efficient if A is set to null in the middle. But as this does not force the mms to destroy A, the statement could still be a waste.
You may get circular references if you do some fancy class design. Then breaking the circle by hand may be necessary - but perhaps avoiding such loops in the first place would be better.
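A small sketch of the circular-reference case (illustrative names): two objects point at each other, so neither reference count reaches zero until the cycle is broken by hand.
var owner = { name: "owner" };
var item  = { name: "item", owner: owner };
owner.item = item;               // cycle: owner -> item -> owner

// ... use the objects ...

owner.item = null;               // break the cycle before dropping the references
item.owner = null;
owner = null;
item = null;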
There are rumours about ancient database access objects with bugs that could be avoided by zapping variables. I wouldn't base my programming rules on such voodoo.
(There is a ton of VBScript code on the internet that is full of "Set X = Nothing"; when asked, the authors tend to talk about 'habit' and other languages (C, C++).)
Building on what Ekkehard.Horner has said...
Scripts like VBScript, JScript, and ASP are executed within an environment that manages memory for you. As such, explicitly setting an object reference to Null or Empty does not necessarily remove it from memory... at least not right away. (In practice it's often nearly instantaneous, but in actuality the task is added to a queue within the environment that is executed at some later point in time.) In this regard, it's really much less useful than you might think.
In compiled code, it's important to clean up memory before a program (or section of code in some cases) ends so that any allocated memory is returned to the system. This prevents all kinds of problems. Outside of slowly running code, this is most important when a program exits. In scripting environments like ASP or WSH, memory management takes care of this cleanup automatically when a script exits. So all object references are set to null for you even if you don't do it explicitly yourself which makes the whole mess unnecessary in this instance.
As far as memory concerns during script execution, if you are building arrays or dictionary objects large enough to cause problems, you've either gone way beyond the scope of scripting or you've taken the wrong approach in your code. In other words, this should never happen in VBScript. In fact, the environment imposes limits to the sizes of arrays and dictionary objects in order to prevent these problems in the first place.
If you have long running scripts which use objects at the top/start, which are unneeded during the main process, setting these objects to null may free up memory sooner and won't do any harm. As mentioned by other posters, there may be little practical benefit.
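A sketch of that pattern in a long-running script; the helper function names here are hypothetical placeholders, not part of WSH or any library.
// A large object needed only at startup is released early (illustrative).
var config = loadBigConfiguration();    // hypothetical helper, used only here
var settings = config.extractSettings(); // hypothetical: keep only what the loop needs
config = null;                           // hint that the big object can be collected

while (moreWorkToDo()) {                 // hypothetical long-running main loop
    processNext(settings);
}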
