According to tests with jsperf a for loop in javascript with this form:
for (var i = 0, item; item = itemsArray[i++];) {
  item = Math.random();
}
is several orders of magnitude faster than a typical for loop, even in older browsers like IE8. I have not been able to find another reference to this loop construct and am wondering why it is so much faster?
Also, I've looked through the sources of some javascript libraries, like jQuery and Knockoutjs and they do not use this construct in their code.
Which leads me to be suspicious. If this form of looping is so much faster, why don't popular libraries, written by people much smarter, use it?
Am I missing something where this loop is not as good as it looks on the surface?
Am I missing something where this loop is not as good as it looks on the surface?
Every time a single item from your itemsArray is falsy, your condition fails to do what is expected.
Actually, that's also the reason your test says it's much faster: it doesn't even do the first iteration, because itemsArray[0] = 0. An updated jsperf which iterates [1..1001] shows that the for loops perform quite similarly, yours actually being one of the slightly slower ones.
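To make the failure mode concrete, here is a minimal sketch (the array contents are hypothetical):

var items = [1, 0, 2]; // 0 is falsy
for (var i = 0, item; item = items[i++];) {
  console.log(item); // logs 1, then stops: items[1] is 0, which ends the loop
}
// A conventional i < items.length loop would visit all three elements.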
why don't popular libraries, written by people much smarter, use it?
They focus on good algorithms, readability, usability, cross-browser support (and of course correctness), not on micro-optimisation.
There are fewer operations, I guess: using .length in the loop condition is slow, and this form doesn't check anything like i < len either. It only verifies that item is truthy to keep looping (which may cause severe issues if any of your items exists but is falsy).
I guess libraries don't use this because it's inelegant, less readable, and may cause the issue mentioned above. It's not a bullet-proof loop.
Let's say you have a TypeScript object where any element can be undefined. If you want to access a heavily nested component, you have to do a lot of comparisons against undefined.
I wanted to compare two ways of doing this in terms of performance: regular if-else comparisons and the lodash function get.
I have found this beautiful tool called jsben where you can benchmark different pieces of JS code. However, I fail to interpret the results correctly.
In this test, lodash get seems to be slightly faster. However, if I define my variable in the Setup block (as opposed to the Boilerplate code), the if-else code is faster by a wide margin.
What is the proper way of benchmarking all this?
How should I interpret the results?
Is get so much slower that you can make an argument in favour of if-else clauses, in spite of the very poor readability?
I think you're asking the wrong question.
First of all, if you're going to do performance micro-optimization (as opposed to, say, algorithmic optimization), you should really know whether the code in question is a bottleneck in your system. Fix the worst bottlenecks until your performance is fine, then stop worrying overmuch about it. I'd be quite surprised if variation between these ever amounted to more than a rounding error in a serious application. But I've been surprised before; hence the need to test.
Then, when it comes to the actual optimization, the two implementations are only slightly different in speed, in either configuration. But if you want to test the deep access to your object, it looks as though the second one is the correct way to think about it. It doesn't seem as though it should make much difference in relative speeds, but the first one puts the initialization code where it will be "executed before every block and is part of the benchmark." The second one puts it where "it will be run before every test, and is not part of the benchmark." Since you want to compare data access and not data initialization, this seems more appropriate.
Given this, there seems to be a very slight performance advantage to the families && families.Trump && families.Trump.members && ... technique. (Note: no ifs or elses in sight here!)
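For illustration, a minimal sketch of the two approaches side by side (the families object here is a hypothetical stand-in shaped like the one the benchmark uses):

var families = { Trump: { members: ['a', 'b'] } };

// The && technique: short-circuits at the first missing link
var members = families && families.Trump && families.Trump.members;

// The lodash equivalent, assuming lodash is loaded as _
var membersViaGet = _.get(families, 'Trump.members');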
But is it worth it? I would say not. The code is much, much uglier. I would not add a library such as lodash (or my favorite, Ramda) just to use a function as simple as this, but if I was already using lodash I wouldn't hesitate to use the simpler code here. And I might import one from lodash or Ramda, or simply write my own otherwise, as it's fairly simple code.
That native code is going to be faster than more generic library code shouldn't be a surprise. It doesn't always happen, as sometimes libraries get to take shortcuts that the native engine cannot, but it's likely the norm. The reason to use these libraries rarely has to do with performance, but with writing more expressive code. Here the lodash version wins, hands-down.
What is the proper way of benchmarking all this?
Only benchmark the actual code you are comparing, and move as much as possible outside of the tested block. Run each of the two pieces a few (hundred) thousand times, to average out the influence of other parts.
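As a rough sketch of that advice (the object and iteration count are arbitrary assumptions), keep the setup outside the timed region and repeat only the code under test:

// Setup: not part of the measurement
var obj = { a: { b: { c: 42 } } };

var t0 = performance.now();
for (var i = 0; i < 1000000; i++) {
  // Only the code under comparison goes here
  var v = obj && obj.a && obj.a.b && obj.a.b.c;
}
var t1 = performance.now();
console.log((t1 - t0) + ' ms for 1,000,000 runs');

A real harness like jsben does more than this (for example, working against dead-code elimination), which is another reason to prefer it over hand-rolled timing.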
How should I interpret the results?
1) check if they are valid:
Do the results fit your expectation?
If not, could there be a cause for that?
Does the test case replicate your actual use case?
2) check if the result is relevant:
How does the time it takes compare to the actual time in your use case? If your code takes 200ms to load, and both tests run in under ~1ms, your result doesn't matter. If, however, you try to optimize code that runs 60 times per second, 1ms is already a lot.
3) check if the result is worth the work:
Often you have to do a lot of refactoring, or you have to type a lot; does the performance gain outweigh the time you invest?
Is get so much slower that you can make an argument in favour of if-else clauses, in spite of the very poor readability?
I'd say no. Use _.get (unless you are planning to run that a few hundred times per second).
Usually when I have to parse a number in JavaScript I use code like
var x="99"
var xnumber= x-0
instead of
var xnumber= parseInt(x)
Is there any problem with using this code, in performance or structure?
Using the "x - 0" method is going to be significantly faster in most browsers.
Here's a JSPerf that shows the performance difference.
You can do your own A/B performance testing using JSPerf.com.
However, you may still want to use parseInt() in some cases, because it's a little clearer. Although, truthfully, any experienced javascript developer isn't going to have any trouble understanding the faster way.
If the line of code is only going to run once every half second or so (or whenever the user types a letter), you can use parseInt without worrying.
However, if this bit of code is in a loop that runs a few thousand times or more, you should definitely use x - 0.
Both ways should work, but in my opinion it is "cleaner" to parse a String with the function made for this purpose rather than with this little trick.
You are also less likely to run into conversion issues, and should you write code in other languages, you are in general better off using the approach the language designers intended. Many other languages will not allow your first way of "casting".
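One concrete difference worth knowing (these results are standard JavaScript behaviour, not specific to any browser):

parseInt("99", 10);   // 99
"99" - 0;             // 99

parseInt("99px", 10); // 99  (parses leading digits, ignores the rest)
"99px" - 0;           // NaN (the whole string must be numeric)

parseInt("3.7", 10);  // 3   (integers only)
"3.7" - 0;            // 3.7 (full numeric conversion, like Number("3.7"))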
Performance-wise you won't see much of a difference, but it's not very readable. I tend to believe parseInt("4", 10) is the right solution because it's in line with how you parse other values in JavaScript (i.e. parseFloat("4.0")), so you stay more consistent by sticking with one method of parsing numbers.
I don't believe it will perform better or worse. Performance optimisations like that one are generally negligible, and can be inconsistent across various platforms. However, the method using parseInt gives a much better idea of your intentions to other programmers. Unless you have a very good reason to optimize this statement, and you are sure it really runs faster in your target environment, you are recommended to stick to the more readable version (in this case, parseInt).
Update: I recommend reading this thread about micro-optimisations: https://softwareengineering.stackexchange.com/questions/99445/is-micro-optimisation-important-when-coding/
In JavaScript, I have an object with an array, and a method which gets a slice of that array and concatenates it with another array.
If that method is run several times in the same function and always returns the same value, will it be faster after the first run (because the result will be cached in the CPU cache)?
Of course no is the only answer here. The purpose of a function is to take some parameters and return a value. The parameters might be different each time you call the function, and even if they are the same, the result might be different; and even if the function returned the same result every time, it might perform an action or cause modifications elsewhere, so caching the result automatically would be a buggy idea.
Maybe.
There are quite a lot of levels of cache to look at here. Your processor alone has more than a single cache. Basically, though, you simply can't say much about those. They might have different sizes, and things like what else you do in the meantime and how long the function is all influence this. It should also be noted that this caching does not work at the level of what you call a function call in JavaScript, but at a much lower level. However, it might at times mean that some time can be shaved off the execution time of the function. I don't think it's too likely or noticeable, but in the end, you can't really say much about it.
Finally, there is JavaScript itself. Per the standard, it doesn't have such caching. However, the standard doesn't prohibit strange caching either, so there might one day be a browser that does it like that (I don't believe there is one right now).
In the end, the basic answer is: no, not in a noticeable way. However, there might actually be a speed gain due to the cache; it's always hard to say.
I guess the general answer to this question is NO!
There is no caching in JavaScript or CPU caching that you can control with JavaScript. If you need to cache something or increase performance, you will have to program that yourself.
See this small example:
http://jsperf.com/cachingjq
No, you'd have to manually (or with a framework) memoize the results: Javascript Memoization Explanation?
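For reference, a minimal memoizer sketch (memoize and slowSquare are hypothetical names, not from any library):

function memoize(fn) {
  var cache = {};
  return function (arg) {
    if (!(arg in cache)) {
      cache[arg] = fn(arg); // compute once, reuse on later calls
    }
    return cache[arg];
  };
}

var slowSquare = function (n) { return n * n; }; // stand-in for your slice/concat method
var fastSquare = memoize(slowSquare);
fastSquare(4); // computed
fastSquare(4); // served from the cache

(This simple version only handles a single primitive argument; real libraries key the cache more carefully.)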
I find the practice of caching an array's length property inside a for loop quite distasteful. As in,
for (var i = 0, l = myArray.length; i < l; ++i) {
  // ...
}
In my eyes at least, this hurts readability a lot compared with the straightforward
for (var i = 0; i < myArray.length; ++i) {
  // ...
}
(not to mention that it leaks another variable into the surrounding function due to the nature of lexical scope and hoisting.)
I'd like to be able to tell anyone who does this "don't bother; modern JS JITers optimize that trick away." Obviously it's not a trivial optimization, since you could e.g. modify the array while it is being iterated over, but I would think given all the crazy stuff I've heard about JITers and their runtime analysis tricks, they'd have gotten to this by now.
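For example, a sketch of the hazard the JIT would have to rule out (the array and values are arbitrary):

var arr = [1, 2, 3];
for (var i = 0, l = arr.length; i < l; ++i) {
  if (arr[i] < 10) arr.push(arr[i] + 10); // grows the array mid-loop
}
// With the cached length the loop runs 3 times; re-reading arr.length
// each pass would also visit the pushed items (6 iterations here).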
Anyone have evidence one way or another?
And yes, I too wish it would suffice to say "that's a micro-optimization; don't do that until you profile." But not everyone listens to that kind of reason, especially when it becomes a habit to cache the length and they just end up doing so automatically, almost as a style choice.
It depends on a few things:
Whether you've proven your code is spending significant time looping
Whether the slowest browser you're fully supporting benefits from array length caching
Whether you or the people who work on your code find the array length caching hard to read
It seems from the benchmarks I've seen (for example, here and here) that performance in IE < 9 (which will generally be the slowest browsers you have to deal with) benefits from caching the array length, so it may be worth doing. For what it's worth, I have a long-standing habit of caching the array length and as a result find it easy to read. There are also other loop optimizations that can have an effect, such as counting down rather than up.
Here's a relevant discussion about this from the JSMentors mailing list: http://groups.google.com/group/jsmentors/browse_thread/thread/526c1ddeccfe90f0
My tests show that all major newer browsers cache the length property of arrays. You don't need to cache it yourself unless you're concerned about IE6 or 7, I don't remember exactly. However, I have been using another style of iteration since those days since it gives me another benefit which I'll describe in the following example:
var arr = ["Hello", "there", "sup"];
for (var i=0, str; str = arr[i]; i++) {
// I already have the item being iterated in the loop as 'str'
alert(str);
}
You must realize that this iteration style stops early if the array is allowed to contain 'falsy' values, so this style cannot be used in that case.
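If falsy values are possible, a sketch of the safe variant (same style, but with an explicit bounds check):

var arr = ["Hello", "", "sup"]; // "" is falsy but still a valid item
for (var i = 0, str; i < arr.length; i++) {
  str = arr[i];
  alert(str); // visits all three items, including the empty string
}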
First of all, how is this harder to do or less legible?
var i = someArray.length;
while (i--) {
  // doStuff to someArray[i]
}
This is not some weird cryptic micro-optimization. It's just a basic work avoidance principle. Not using the '.' or '[]' operators more than necessary should be as obvious as not recalculating pi more than once (assuming you didn't know we already have that in the Math object).
[rantish elements yoinked]
If someArray is entirely internal to a function it's fair game for JIT optimization of its length property which is really like a getter that actually counts up the elements of the array every time you access it. A JIT could see that it was entirely locally scoped and skip the actual counting behavior.
But this involves a fair amount of complexity. Every time you do anything that mutates that Array you have to treat length like a static property and tell your array altering methods (the native code side of them I mean) to set the property manually whereas normally length just counts the items up every time it's referenced. That means every time a new array-altering method is added you have to update the JIT to branch behavior for length references of a locally scoped array.
I could see Chrome doing this eventually, but I don't think it does yet, based on some really informal tests. I'm not sure IE will ever have this level of performance fine-tuning as a priority. As for the other browsers, you could make a strong argument that the maintenance issue of having to branch behavior for every new array method is more trouble than it's worth. At the very least, it would not get top priority.
Ultimately, accessing the length property every loop cycle isn't going to cost you a ton, even in the old browsers, for a typical JS loop. But I would advise getting in the habit of caching any property lookup done more than once, because with getter properties you can never be sure how much work is being done, which browsers optimize in what ways, or what kind of performance costs you could hit down the road when somebody decides to move someArray outside of the function, which could lead to the call object being checked in a dozen places before finding what it's looking for on every property access.
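For instance, a live HTMLCollection is a case where the lookups really can do work on every access (a sketch; assumes the page has some div elements):

// getElementsByTagName returns a live collection that tracks the document
var divs = document.getElementsByTagName('div');

// Cache the length and the repeated indexed lookup
for (var i = 0, len = divs.length; i < len; i++) {
  var div = divs[i];
  div.className = 'visited';
}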
Caching property lookups and method returns is easy, cleans your code up, and ultimately makes it more flexible and performance-robust in the face of modification. Even if one or two JITs did make it unnecessary in circumstances involving a number of 'ifs', you couldn't be certain they always would or that your code would continue to make it possible to do so.
So yes, apologies for the anti-let-the-compiler-handle-it rant, but I don't see why you would ever want to not cache your properties. It's easy. It's clean. It guarantees better performance regardless of browser, or of the object having its properties examined moving to an outer scope.
But it really does piss me off that Word docs load as slowly now as they did back in 1995, and that people continue to write horrendously slow-performing Java websites even though Java's VM supposedly beats all non-compiled contenders for performance. I think this notion that you can let the compiler sort out the performance details, and that "modern computers are SO fast", has a lot to do with that. We should always be mindful of work avoidance, when the work is easy to avoid and doesn't threaten legibility/maintainability, IMO. Doing it differently has never helped me (or, I suspect, anybody) write code faster in the long term.
I'm into selector performance lately, and it's bugging me that the browsers which currently implement the Selectors API don't use document.getElementById when a simple #id is passed.
The performance penalty is huge, so library authors continue to implement their own way around that.
Any ideas?
After making my comment above, I decided to follow through:
From Node.cpp in the Chromium source
if (strictParsing && inDocument() && querySelectorList.hasOneSelector() && querySelectorList.first()->m_match == CSSSelector::Id) {
    Element* element = document()->getElementById(querySelectorList.first()->m_value);
    if (element && (isDocumentNode() || element->isDescendantOf(this)) && selectorChecker.checkSelector(querySelectorList.first(), element))
        return element;
    return 0;
}
So it does map onto getElementById; it is just that parsing the string looking for selectors is an expensive operation.
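In other words, the two calls below end up at the same lookup; the difference is the selector-parsing step in front of it (assuming a single element with id "foo"):

var a = document.getElementById('foo');
var b = document.querySelector('#foo'); // parses "#foo" first, then hits the id fast path
console.log(a === b); // true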
Tbh, the performance penalty is insignificant... I really doubt you're going to do 100,000 id lookups per second, and if you do, then QSA performance is actually the last thing you should look at.
As to why: adding an extra if/else might make id lookups more performant, but then every other CSS selector would be a fraction (still insignificantly) slower. Why optimize QSA to handle id lookups when there's a specialist method that does exactly that a lot faster anyway?
In any case, browsers are aiming for speed, and leaving out checks like this makes the overall performance charts look a lot better. In this benchmark race it's REALLY about every single millisecond, but as for developers... please be realistic: other benchmarks are more important, and QSA performance shouldn't really be a factor anymore.
As for developer convenience: it works, and it's still so fast that you won't notice it in actual applications (I challenge you to show me a case where it IS visually noticeable whilst still being a sane program ;o).
Maybe because if they did that, they would have to add a check to see if it's a simple id query (no modifiers), which would slow down every other query. It might not be a huge performance hit to do the test, but it's difficult to speak for other developers.
I think if you are worried about it you can add a function like getObByID that checks for document.getElementById, uses it if it exists, and otherwise falls back to the selector (see the sketch below). Maybe the developers don't feel the need to add this type of abstraction when you can easily do it yourself; it would be up to developers to remember to use it, and it would increase the learning curve.
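A sketch of that wrapper (getObByID is this answer's hypothetical name; it assumes the id needs no CSS escaping):

function getObByID(id) {
  if (document.getElementById) {
    return document.getElementById(id); // the specialist fast path
  }
  return document.querySelector('#' + id); // fall back to the Selectors API
}

var el = getObByID('foo');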
I was comparing getElementById() and querySelector() and found that someone has already done performance comparisons and calculations.
It certainly looks as though querySelector() wins every time... and by a considerable amount.