Do modern JavaScript JITers need array-length caching in loops? - javascript

I find the practice of caching an array's length property inside a for loop quite distasteful. As in,
for (var i = 0, l = myArray.length; i < l; ++i) {
// ...
}
In my eyes at least, this hurts readability a lot compared with the straightforward
for (var i = 0; i < myArray.length; ++i) {
// ...
}
(not to mention that it leaks another variable into the surrounding function due to the nature of lexical scope and hoisting.)
I'd like to be able to tell anyone who does this "don't bother; modern JS JITers optimize that trick away." Obviously it's not a trivial optimization, since you could e.g. modify the array while it is being iterated over, but I would think given all the crazy stuff I've heard about JITers and their runtime analysis tricks, they'd have gotten to this by now.
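For example (a hypothetical sketch of the kind of mid-loop mutation I mean, with shouldRemove as a made-up predicate):
for (var i = 0; i < myArray.length; ++i) {
    if (shouldRemove(myArray[i])) {
        myArray.splice(i--, 1);   // the length shrinks here; a naively cached length would now be stale
    }
}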
Anyone have evidence one way or another?
And yes, I too wish it would suffice to say "that's a micro-optimization; don't do that until you profile." But not everyone listens to that kind of reason, especially when it becomes a habit to cache the length and they just end up doing so automatically, almost as a style choice.

It depends on a few things:
Whether you've proven your code is spending significant time looping
Whether the slowest browser you're fully supporting benefits from array length caching
Whether you or the people who work on your code find the array length caching hard to read
It seems from the benchmarks I've seen (for example, here and here) that performance in IE < 9 (which will generally be the slowest browsers you have to deal with) benefits from caching the array length, so it may be worth doing. For what it's worth, I have a long-standing habit of caching the array length and as a result find it easy to read. There are also other loop optimizations that can have an effect, such as counting down rather than up.
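For reference, the cached-length and count-down variants side by side (a sketch; myArray is whatever you're looping over):
// Cached length, counting up:
for (var i = 0, len = myArray.length; i < len; i++) {
    // ... use myArray[i]
}

// Counting down, which only reads length once and compares against zero:
for (var i = myArray.length - 1; i >= 0; i--) {
    // ... use myArray[i]
}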
Here's a relevant discussion about this from the JSMentors mailing list: http://groups.google.com/group/jsmentors/browse_thread/thread/526c1ddeccfe90f0

My tests show that all major newer browsers cache the length property of arrays. You don't need to cache it yourself unless you're concerned about old IE (6 or 7, I don't remember exactly which). However, I have been using another style of iteration since those days because it gives me another benefit, which I'll describe in the following example:
var arr = ["Hello", "there", "sup"];
for (var i = 0, str; str = arr[i]; i++) {
    // I already have the item being iterated in the loop as 'str'
    alert(str);
}
You must realize that this iteration style stops early if the array is allowed to contain 'falsy' values (0, "", null, undefined, NaN, false), so it cannot be used in that case.
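A quick illustration of that caveat:
var arr = ["Hello", "", "sup"];             // the empty string is falsy
for (var i = 0, str; str = arr[i]; i++) {
    alert(str);                             // only "Hello" is alerted; the loop stops at ""
}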

First of all, how is this harder to do or less legible?
var i = someArray.length;
while (i--) {
    // doStuff to someArray[i]
}
This is not some weird cryptic micro-optimization. It's just a basic work avoidance principle. Not using the '.' or '[]' operators more than necessary should be as obvious as not recalculating pi more than once (assuming you didn't know we already have that in the Math object).
[rantish elements yoinked]
If someArray is entirely internal to a function it's fair game for JIT optimization of its length property which is really like a getter that actually counts up the elements of the array every time you access it. A JIT could see that it was entirely locally scoped and skip the actual counting behavior.
But this involves a fair amount of complexity. Every time you do anything that mutates that Array you have to treat length like a static property and tell your array altering methods (the native code side of them I mean) to set the property manually whereas normally length just counts the items up every time it's referenced. That means every time a new array-altering method is added you have to update the JIT to branch behavior for length references of a locally scoped array.
I could see Chrome doing this eventually, but I don't think it does yet, based on some really informal tests. I'm not sure IE will ever have this level of performance fine-tuning as a priority. As for the other browsers, you could make a strong argument that the maintenance issue of having to branch behavior for every new array method is more trouble than it's worth. At the very least, it would not get top priority.
Ultimately, accessing the length property every loop cycle isn't going to cost you a ton, even in the old browsers, for a typical JS loop. But I would advise getting in the habit of caching any property lookup that is done more than once: with getter properties you can never be sure how much work is being done or which browsers optimize in which ways. You also can't know what performance costs you could hit down the road when somebody decides to move someArray outside of the function, which could lead to the call object being checked in a dozen places before finding what it's looking for every time you do that property access.
Caching property lookups and method returns is easy, cleans your code up, and ultimately makes it more flexible and performance-robust in the face of modification. Even if one or two JITs did make it unnecessary in circumstances involving a number of 'ifs', you couldn't be certain they always would or that your code would continue to make it possible to do so.
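A minimal sketch of the habit being argued for (the names here, such as settings and limits.threshold, are made up for illustration):
function countOverThreshold(values, settings) {
    var threshold = settings.limits.threshold; // nested property chain resolved once, not per iteration
    var len = values.length;                   // length read once
    var count = 0;
    for (var i = 0; i < len; i++) {
        if (values[i] > threshold) count++;
    }
    return count;
}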
So yes, apologies for the anti-let-the-compiler-handle-it rant, but I don't see why you would ever want to not cache your properties. It's easy. It's clean. It guarantees better performance regardless of browser, or of the object whose properties are being examined moving to an outer scope.
But it really does piss me off that Word docs load as slowly now as they did back in 1995, and that people continue to write horrendously slow-performing Java websites even though Java's VM supposedly beats all non-compiled contenders for performance. I think the notion that you can let the compiler sort out the performance details, and that "modern computers are SO fast", has a lot to do with that. We should always be mindful of work avoidance when the work is easy to avoid and doesn't threaten legibility/maintainability, IMO. Doing it differently has never helped me (or, I suspect, anybody) write code faster in the long term.

Related

Time complexity of Javascript Array.find() in modern browsers

Since array.find() iterates over an array, if I handle (potentially) large arrays, I always make sure to have an indexed object like so:
{ [id:string]: Item }
if I need to look up items by id in these arrays.
However, living in a time of V8 (and comparable engine optimisations for Safari and Firefox), I'm wondering if maybe under the hood, a simple array.find() is already optimized for it? Or will optimize for it (create such an indexed object) at runtime as soon as it has to perform this operation once?
Is it true that modern browsers already have some kind of optimization for O(N) type algorithms that could become O(1) with the proper implementation? Or am I thinking too much of what these browsers actually can / will do under the hood?
V8 developer here. The time complexity of Array.prototype.find is O(n) (with n being the array's length), and it's fair to assume that it will remain that way.
Generally speaking, it's often impossible for engines to improve the complexity class of an operation. In case of Array.prototype.find, the predicate function you pass might well care how often it gets called:
[1, 2, 3].find((value, index, object) => {
    console.log(`Checking ${value}...`); // Or any other side effect.
    return value === 42;
});
In such a case, the engine has no choice but to iterate over the entire array in exactly the right order, because anything else would observably break your program's behavior.
In theory, since JS engines can do dynamic optimizations, they could inspect the predicate function, and if it has no side effects, they could use it to build up some sort of index/cache. Aside from the difficulty of building such a system that works for arbitrary predicates, this technique even when it does work would only speed up repeated searches of the same array with the same function, at the cost of wasting time and memory if this exact same scenario will not occur again. It seems unlikely that an engine can ever make this prediction with sufficient confidence to justify investing this time and memory.
As a rule of thumb: when operating on large data sets, choosing efficient algorithms and data structures is worth it. Typically far more worth it than the micro-optimizations we're seeing so much in SO questions :-)
A highly optimized/optimizing engine may be able to make your O(n) code somewhere between 10% and 10x as fast as it would otherwise be. By switching to an O(log n) or O(1) solution on your end, you can speed it up by orders of magnitude. That's often accomplished by doing something that engines can't possibly do. For example, you can keep your array sorted and then use binary search over it -- that's something an engine can't do for you automatically because obviously it's not allowed to reorder your array's contents without your approval. And as @myf already points out in a comment: if you want to access things by a unique key, then using a Map will probably work better than using an Array.
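For instance, a minimal sketch of the Map approach (assuming each item has a unique id; items and someId are made-up names):
// Build the index once: O(n)...
const byId = new Map(items.map(item => [item.id, item]));
// ...then every lookup is O(1):
const found = byId.get(someId);
// versus scanning the whole array on every call, which is O(n) each time:
const foundAgain = items.find(item => item.id === someId);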
That said, simple solutions tend to scale better than we intuitively assume; the standard warning against premature optimizations applies here just as everywhere else. Linearly searching through arrays is often just fine, you don't need a (hash) map just because you have more than three items in it. When in doubt, profile your app to find out where the performance bottlenecks are.

How does hidden classes really avoid dynamic lookups?

So we have all heard that V8 uses the thing called hidden classes where, when many objects have the same shape, they just store a pointer to the shape struct which stores fixed offsets. I have heard this a million times, and I very much get how this reduces memory usage by A LOT (not having to store a map for each one is amazing) and potentially, because of that, gives a bit faster performance.
However I still don't understand how it avoids dynamic lookup. The only thing I have heard is storing a cache between a string (field name) and a fixed offset, and checking it every time, but if there's a cache miss (which is likely to happen) there will still be a dynamic lookup.
Everyone says that this is almost as fast as C++ field access (which is usually just a mov instruction), however this single-entry field-access cache isn't even close.
Look at the following function:
function getx(o)
{
    return o.x;
}
How will v8 make the access of the x field so fast and avoid dynamic lookup?
(V8 developer here.)
The key is that hidden classes allow caching. So, certainly, a few dynamic lookups will still be required. But thanks to caching, they don't have to be repeated every single time a property access like o.x is executed.
Essentially, the first time your example function getx is called, it'll have to do a full property lookup, and it'll cache both the hidden class that was queried and the result of the lookup. On subsequent calls, it'll simply check the incoming o's hidden class against the cached data, and if it matches, it uses the cached information about how to access the property. Of course, on mismatch, it has to fall back to a dynamic lookup again. (In reality it's a bit more complicated because of additional tradeoffs that are involved, but that's the basic idea.)
So if things go well, a single dynamic lookup is sufficient for an arbitrary number of calls to such a function. In practice, things go well often enough that if you started with a very simple JS engine that didn't do any such tricks, and you added caching of dynamic lookup results, you could expect approximately a 10x performance improvement across the board. Adding an optimizing compiler can then take that idea one step further and embed cached results right into the optimized code; that can easily give another 10x improvement and approach C-like performance: code still needs to contain hidden class checks, but on successful check can read properties with a single instruction. (This is also why JS engines can't just "optimize everything immediately" even if they wanted to: optimization crucially depends on well-populated caches.)
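To make that concrete, here's an illustrative sketch (not engine-accurate, just the idea) of when the cached lookup keeps hitting:
function getx(o) {
    return o.x;
}

getx({ x: 1, y: 2 }); // first call: full lookup, then cache "this hidden class -> offset of x"
getx({ x: 3, y: 4 }); // same property order => same hidden class => cache hit, fast path
getx({ y: 5, x: 6 }); // different order => different hidden class => cache miss, dynamic lookup again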

Why is destructuring an array in javascript slower than for an object?

I have run a few different variations of this, but this is the basic test that I have made on jsbench.me:
https://jsbench.me/j2klgojvih/1
This initial benchmark has an obvious initial optimization that makes the object destructure significantly faster. If you move the declaration of t into each test block, that underlying optimization disappears, but the array destructure still loses.
The test is a simple concept represented by:
const t = [1, 2, 3];
// Test 1 (Slower)
const [x, y, z] = t;
// Test 2 (Faster)
const {0: x, 1: y, 2: z} = t;
I would think V8 (or any JS engine) could/should run the array destructuring faster; however, I have not been able to make a variation of the test where that is the case.
If I were to poke a guess at the reasoning, it'd be that array destructuring runs some iterator to loop through the array.
(V8 developer here.)
If I were to poke a guess at the reasoning, it'd be that array destructuring runs some iterator to loop through the array.
Yup. The spec pretty much demands it that way.
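The iteration is even observable from user code, which is why the engine can't silently skip it (a small sketch):
const t = [1, 2, 3];
const originalIterator = t[Symbol.iterator];
t[Symbol.iterator] = function () {
    console.log('iterator requested');    // side effect proves the iteration protocol is used
    return originalIterator.call(this);
};
const [x, y, z] = t;            // logs "iterator requested"
const { 0: a, 1: b, 2: c } = t; // logs nothing: plain property reads, no iterator involved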
WHY would you iterate for statically known properties?
In JavaScript, significantly fewer things are "statically known" than it might seem at first. And even if they're statically derivable in a microbenchmark, that might not be enough reason to optimize for them, because real-world code tends to be a lot more complicated.
I am definitely asking this for the purpose of micro-optimization.
Be aware that microbenchmarks are usually misleading, even for micro-optimizations. If your real use-case is different from the benchmark, then the benchmark's results are very likely not going to be representative, and as such may well lead you to wasting time on things that don't help or are even counter-productive.
In this particular case, I have no reason to doubt that array destructuring will likely be somewhat slower than object destructuring regardless of circumstances; but the relative difference and hence whether it matters depend a lot on the situation (factors such as: function size, call count, inlineability, are the results used or ignored, are the inputs constant or changing, ...).
So, I'm looking to see if this is likely to remain steady for a long time, or if it's something just not addressed yet.
I don't know whether there is much untapped performance potential in array destructuring, nor whether/when someone might look into it.
It's not designed to be incredibly performant
Oh, yes, it is; and we keep working hard to make it even more performant.

Why is 'delete' slow in javascript?

I just stumbled upon this jsperf result: http://jsperf.com/delet-is-slow
It shows that using delete is slow in javascript but I am not sure I get why. What is the javascript engine doing behind the scene to make things slow?
I think the question is not why delete is slow... The speed of a simple delete operation is not worth measuring...
The JS perf link that you show does the following:
Create two arrays of 6 elements each.
Delete at one of the indexes of one array.
Iterate through all the indexes of each array.
The script shows that iterating through an array on which delete was applied is slower than iterating through a normal array.
You should ask yourself, why delete makes an array slow?
The engine internally stores array elements in contiguous memory space, and accesses them using a numeric indexer.
That's what they call a fast access array.
If you delete one of the elements in this ordered and contiguous index, you force the array to mutate into dictionary mode... thus, what before was the exact location of the item in the array (the indexer) becomes the key in the dictionary under which the array has to search for the element.
So iterating becomes slow, because you no longer just move to the next space in memory; instead you perform a hash search over and over again.
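A hedged sketch of the effect being described (exact behaviour varies by engine):
var packed = [1, 2, 3, 4, 5, 6];
var holey  = [1, 2, 3, 4, 5, 6];

delete holey[2];      // leaves a hole at index 2; the engine may now fall back to dictionary mode
holey[2];             // undefined, but holey.length is still 6

// If removal is actually needed, splice keeps the array contiguous (at the cost of shifting elements):
packed.splice(2, 1);  // [1, 2, 4, 5, 6]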
You'll get a lot of answers here about micro-optimisation, but delete really does sometimes have severe problems where it becomes incredibly slow in certain scenarios that people must be aware of in JS. These are, to my knowledge, edge cases and may or may not apply to you.
I recommend to profile and benchmark in different browsers to detect these anomalies.
I honestly don't know the reasons, as I tend to just work around it, but I would guess it's some combination of quirks in the GC (it might be getting invoked too often), brutal rehashing, optimisations for other cases, and weird object structure / bad time complexity.
The cases usually involve moderate to large numbers of keys, for example:
Delete from objects with many keys:
(function() {
    var o = {}, s, i, c = console;
    s = new Date(); for (i = 0; i < 1000000; i += 10) o[i] = i;  c.log('Set: ' + (new Date() - s));
    s = new Date(); for (i = 0; i < 50000; i += 10) delete o[i]; c.log('Delete: ' + (new Date() - s));
})();
Chrome:
Set: 21
Delete: 2084
Firefox:
Set: 74
Delete: 2
I have encountered a few variations of this and they are not always easy to reproduce. A signature is that it usually seems to degrade exponentially. In one case in Firefox, delete inside a for...in loop would degrade to around 3-6 operations a second, whereas deleting while iterating Object.keys would be fine.
I personally tend to think that these cases can be considered bugs. You get massive, disproportionate, asymptotic performance degradation that you can work around in ways that shouldn't change the time or space complexity (and that might even normally make performance moderately worse). This means that, when considered as a declarative language, JS gets the implementation/optimisations wrong. Map does not have the same problem with delete, as far as I have seen.
Reasons:
To be sure, you would have to look into the source code or run some profiling.
delete in various scenarios can change speed arbitrarily based on how engines are written and this can change from version to version.
JavaScript objects tend not to be used with large numbers of properties, and delete is called relatively infrequently in everyday usage. They're also used heavily as part of the language (they're essentially associative arrays). Virtually everything relies on an implementation of an object: if you create a function, that makes an object; if you put in a numeric literal, it's actually an object.
It's quite possible for it to be slow purely because it hasn't been well optimised (either neglect or other priorities) or due to mistakes in implementation.
There are some common possible causes aside from optimisation deficit and mistakes.
Garbage Collection:
A poorly implemented delete function may inadvertently trigger garbage collection excessively.
Garbage collection has to iterate everything in memory to find out if there are any orphans, traversing variables and references as a graph.
The need to detect circular references can make GC especially expensive. GC without circular reference detection can be done using reference counters and a simple check. Circular reference checking requires traversing the reference graph or another structure for managing references and in either case that's a lot slower than using reference counters.
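For example, reference counting alone can't reclaim a cycle like this (a small sketch):
// These two objects reference each other, so their reference counts never drop to zero
// even once nothing else can reach them; only a traversing GC can free them.
function makeCycle() {
    var a = {}, b = {};
    a.other = b;
    b.other = a;
}
makeCycle();   // after returning, a and b are unreachable but still point at each other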
In normal implementations, rather than freeing and rearranging memory every time something is deleted, things are usually marked as possible to delete with GC deferred to perform in bulk or batches intermittently.
Mistakes can potentially lead to the entire GC being triggered too frequently. That may involve someone having put in an instruction to run the GC on every delete, or a mistake in its statistical heuristics.
Resizing:
It's also quite possible that, for large objects, the memory remapping needed to shrink them is not well optimised. When you have dynamically sized structures internally in the engine, it can be expensive to resize them. Engines will also have varying levels of their own memory management on top of what the operating system provides, which will significantly complicate things.
Where an engine manages its own memory efficiently, even a delete that removes an object cheaply without the need for a full GC run (because it has no circular references) can trigger the need to rearrange memory internally to fill the gap.
That may involve reallocating data at the new size, then copying everything from the old memory to the new memory before freeing the old memory. It can also require updating all of the pointers to the old memory in some cases (i.e., where a pointer-to-pointer ['all roads lead to Rome'] scheme wasn't used).
Rehashing:
It may also rehash on deletes (needed because the object is an associative array). Often implementations only rehash on demand, when there are hash collisions, but some engines may also rehash on deletes.
Rehashing on deletes prevents a problem where you can add 10 items to an object, then add a million objects, then remove those million objects and the object left with the 10 items will both take up more memory and be slower.
A hash with ten items needs ten slots with an optimal hash, though in practice it'll be rounded up to 16 slots. That times the size of a pointer is 16 * 8 bytes, or 128 bytes. When you add the million items, it needs about a million slots (2^20), which at 8 bytes each is 8 megabytes. If you delete the million keys without rehashing, then the object you have with ten items is taking up 8 megabytes when it only needs 128 bytes. That makes it important to rehash on item removal.
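The arithmetic behind that example, assuming 8-byte slots and power-of-two table sizes:
var smallTable = 16 * 8;               // 128 bytes for the 10-item table (rounded up to 16 slots)
var largeTable = Math.pow(2, 20) * 8;  // 8,388,608 bytes (~8 MB) after growing to ~1M slots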
The problem with this is that when you add, you know whether you need to rehash, because there will be a collision. When deleting, you don't know whether you need to rehash or not. It's easy to make a performance mistake there and rehash every time.
There are a number of strategies for reasonable downsizing/rehash intervals, although without making things complicated it can be easy to make a mistake. Simple approaches (i.e., time since last rehash, size of key set versus minimum size, history of collision pairs, tombstone keys and deleting in bulk as part of GC, etc.) tend to work moderately well on average, but can also easily get stuck in corner cases. Some engines might switch to a different hash implementation for large objects, such as a nested one, whereas others might try to use one implementation for everything.
Rehashing tends to work the same as resizing for simple implementations: make an entirely new table, then insert the old entries into it. However, for rehashing a lot more needs to be done beforehand.
No Bulk:
The delete keyword doesn't let you remove a bunch of things at once from a hash. It's usually much more efficient to delete items from a hash in bulk (most operations on the same thing work better in bulk), but delete only allows you to remove them one by one. That makes it slow by design for cases with multiple deletes on the same object.
Due to this, only a handful of implementations of delete would be comparable in speed to creating a new object and inserting the items you want to keep (though this doesn't explain why, with some engines, delete is slow in its own right for a single call).
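A sketch of that rebuild-and-keep approach (the function and argument names here are made up):
function withoutKeys(obj, keysToDrop) {
    var drop = new Set(keysToDrop);
    var out = {};
    for (var key in obj) {
        if (Object.prototype.hasOwnProperty.call(obj, key) && !drop.has(key)) {
            out[key] = obj[key];
        }
    }
    return out;
}

// Usage: replace the old object wholesale instead of issuing many deletes, e.g.
// bigObject = withoutKeys(bigObject, keysToRemove);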
V8:
Slow delete of object properties in JS in V8
Apparently this was caused by switching out implementations, and a problem similar to, but distinct from, the downsizing problem seen with hashes (downsizing to a flat array / run-through implementation rather than a hash size). At least I was close.
Downsizing is a fairly common problem in programming which results in what is somewhat akin to a game of Jenga.

JavaScript optimization: Caching math functions globally

I'm currently doing some "extreme" optimization on a JavaScript game engine I'm writing. And I have noticed I use math functions a lot! And I'm currently only caching them locally per function I use them in. So I was going to cache them at the global level in the window object using the below code.
var aMathFunctions = Object.getOwnPropertyNames(Math);
for (var i in aMathFunctions)
{
    window[aMathFunctions[i]] = Math[aMathFunctions[i]];
}
Are there any major problems or side effects with this? Will I be overwriting existing functions in window, and will I be increasing my memory footprint dramatically? Or what else may go wrong?
EDIT: Below is an excerpt from some reading I have done about JavaScript optimization that has led me to try this.
Property Depth
Nesting objects in order to use dot notation is a great way to
namespace and organize your code. Unfortunately, when it comes to
performance, this can be a bit of a problem. Every time a value is
accessed in this sort of scenario, the interpreter has to traverse the
objects you've nested in order to get to that value. The deeper the
value, the more traversal, the longer the wait. So even though
namespacing is a great organizational tool, keeping things as shallow
as possible is your best bet at faster performance. The latest
incarnation of the YUI Library evolved to eliminate a whole layer of
nesting from its namespacing. So for example, YAHOO.util.Anim is now
Y.Anim.
Reference: http://www.phpied.com/extreme-javascript-optimization/
Edit: Should not matter anymore in Chrome due to this revision; perhaps caching is now even faster.
Don't do it, it's much slower when using global functions.
http://jsperf.com/math-vs-global
On Chrome:
sqrt(2); - 12,453,198 ops/second
Math.sqrt(2); - 542,475,219 ops/second
As for memory usage, on the other hand, globalizing it wouldn't be bad at all. You just create another reference; the function itself will not be copied.
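If the goal is just to avoid repeated Math.* lookups in a hot path, a local alias gives the same effect without touching window (a sketch; stepPhysics and bodies are made-up names):
function stepPhysics(bodies) {
    var sqrt = Math.sqrt;                                // cached once per call
    for (var i = 0, len = bodies.length; i < len; i++) {
        var b = bodies[i];
        b.speed = sqrt(b.vx * b.vx + b.vy * b.vy);       // no Math.* property lookup inside the loop
    }
}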
I am actually amazed that it is faster for me on Mac OS X and Firefox 5; we're talking a 5-8 ms difference over 50,000 iterations.
console.time("a");
for (var i = 0; i < 50000; i++) {
    var x = Math.floor(12.56789);
}
console.timeEnd("a");

var floor = Math.floor;
console.time("b");
for (var i = 0; i < 50000; i++) {
    var y = floor(12.56789);
}
console.timeEnd("b");
I see only one real bonus: it will reduce the footprint of the code. I have not tested any other browsers, so it may be a boost in one and slower in others.
Would it cause any problems? I don't see why it would unless you have things in global scope with those names. :)
