Which JS benchmark site is correct? - javascript

I created a benchmark on both jsperf.com and jsben.ch, however, they're giving substantially different results.
JSPerf: https://jsperf.com/join-vs-template-venryx
JSBench: http://jsben.ch/9DaxR
Note that the code blocks are exactly the same.
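For concreteness, the two approaches under test look something like this (a reconstruction with hypothetical values; the exact blocks are on the linked benchmark pages):

```javascript
// Reconstruction of the kind of code being compared (hypothetical values).
const first = "Hello";
const last = "World";

// Block 1: Array.prototype.join
const joined = [first, " ", last].join("");

// Block 2: template literal
const templated = `${first} ${last}`;

// Both build the identical string; only the construction strategy differs.
console.log(joined === templated); // true
```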
On jsperf, block 1 is "61% slower" than the fastest:
On jsbench, block 1 is only 32% slower than the fastest: ((99 - 75) / 75)
What gives? I would expect benchmark sites to give the same results, at least within a few percent.
As it stands, I'm unable to make a conclusion on which option is fastest because of the inconsistency.
EDIT
Extended list of benchmarks:
https://jsperf.com/join-vs-template-venryx
https://jsbench.me/f3k3g71sg9
http://jsbench.github.io/#7f03c3d3fdc9ae3a399d0f2d6de3d69f
https://run.perf.zone/view/Join-vs-Template-Venryx-1512492228976
http://jsben.ch/9DaxR
Not sure which is the best, but I'd skip jsben.ch (the last one) for the reasons Job mentions: it doesn't display the number of runs, the error margin, or the number of operations per second, which are important for estimating absolute performance impact and for stable comparison between benchmark sites and/or browsers and browser versions.
(At the moment http://jsbench.me is my favorite.)

March 2019 Update: results are inconsistent between Firefox and Chrome - perf.zone behaves anomalously on Chrome, jsben.ch behaves anomalously on Firefox. Until we know exactly why, the best you can do is benchmark on multiple websites (but I'd still skip jsben.ch; the others give you at least some error margin and stats on how many runs were taken, and so on).
TL;DR: running your code on perf.zone and on jsbench.github.io (see here and here), the results closely match jsperf. Personally, and for other reasons than just these results, I trust these three websites more than jsben.ch.
Recently, I tried benchmarking the performance of string concatenation too, but in my case it's building one string out of 1000000+ single-character strings (join('') wins for numbers this large and up, btw). On my machine jsben.ch timed out instead of giving a result at all. Perhaps it works better on yours, but for me that's a big warning sign:
http://jsben.ch/mYaJk
http://jsbench.github.io/#26d1f3705b3340ace36cbad7b24055fb
https://run.perf.zone/view/join-vs-concat-when-dealing-with-very-long-lists-of-single-character-strings-1512490506658
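The comparison described above can be sketched as follows (a rough illustration, not a rigorous benchmark; N is reduced here so the snippet finishes quickly):

```javascript
// Build one string out of many single-character strings, two ways.
const N = 100000; // the real test used 1,000,000+
const chars = Array.from({ length: N }, () => "a");

let t = Date.now();
const viaJoin = chars.join("");
console.log("join('') took", Date.now() - t, "ms");

t = Date.now();
let viaConcat = "";
for (const c of chars) viaConcat += c;
console.log("+= concatenation took", Date.now() - t, "ms");

console.log(viaJoin === viaConcat); // true: same result either way
```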
(I can't be bothered to ever have to deal with jsperf's "not all tests inserted" error again, sorry)
At the moment I suspect but can't prove that perf.zone has slightly more reliable benchmark numbers:
when optimising lz-string I used jsbench.github.io for a very long time, but at some point I noticed there were impossibly large error margins for certain types of code, over 100%.
running benchmarks on mobile is fine with jsperf.com and perf.zone, but jsbench.github.io is kinda janky and the CSS breaks while running tests.
Perhaps these two things are related: perhaps the method that jsbench.github.io uses to update the DOM introduces some kind of overhead that affects the benchmarks (they should meta-benchmark that...).
Note: perf.zone is not without its flaws. It sometimes times out when trying to save a benchmark (the worst time to do so...) and you can only fork your own code, not edit it. But the output still seems to be more in line with jsperf, and it has a really nice "quick" mode for throwaway benchmarking

Sorry for a bump but might be interesting for others running into this in search results.
I can't speak for others, but jsbench.me just uses benchmark.js for testing. It's a single-page React app, meaning it runs completely in your browser on your engine of choice, so results should be consistent within a single browser. You can run it in Firefox or on mobile and results will be different, of course. But absolutely nothing related to testing is on the server, other than AWS DynamoDB to store results.
P.S. I'm the author, so it's just the passion project of an individual. It currently doesn't cost me anything, as it's optimized for serverless and fits the AWS free tier. The amount of work on it is proportional to the number of users :)
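For readers unfamiliar with benchmark.js: the core idea is to run each candidate many times and report statistics rather than a single timing. A minimal hand-rolled sketch of that idea (benchmark.js itself adds warm-up, adaptive sample sizes, and significance testing; the names below are made up):

```javascript
// Minimal benchmark-harness sketch (Node-only, uses process.hrtime).
function bench(fn, runs = 30, itersPerRun = 10000) {
  const times = [];
  for (let r = 0; r < runs; r++) {
    const start = process.hrtime.bigint();
    for (let i = 0; i < itersPerRun; i++) fn();
    times.push(Number(process.hrtime.bigint() - start) / 1e6); // ms per run
  }
  const mean = times.reduce((a, b) => a + b, 0) / runs;
  const variance = times.reduce((a, b) => a + (b - mean) ** 2, 0) / runs;
  return { mean, stddev: Math.sqrt(variance) };
}

const stats = bench(() => ["a", "b"].join(""));
// A site that hides these numbers (runs, spread) makes results hard to trust.
console.log(stats.mean > 0 && stats.stddev >= 0); // true
```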

AFAIK one issue is that various JavaScript engines optimize vastly differently based on the environment.
I have a test of the exact same function that produces different results based on where the function is created. In other words, for example, in one test it's
const lib = {}
lib.testFn = function() {
  ....
}
And in other it's
const lib = {
  testFn: function() {
    ....
  },
};
and in another it's
function testFn() {
  ....
}
const lib = {}
lib.testFn = testFn
and there's a >10% difference in results for a non-trivial function in the same browser and different results across browsers.
What this means is that no JavaScript benchmark is correct, because how the benchmark runs its tests (the test harness itself) affects the results. The harness, for example, might XHR the test script. It might call eval. It might run the test in a worker, or in an iframe. And the JS engine might optimize all of those differently.
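To make the three forms above concrete (the original uses "...."; the trivial body here is hypothetical): all three are semantically identical, which is exactly why harness-dependent timing differences are so surprising.

```javascript
// Form 1: assign onto an existing object
const libA = {};
libA.testFn = function () { return 1 + 1; };

// Form 2: define inside an object literal
const libB = {
  testFn: function () { return 1 + 1; },
};

// Form 3: named function declaration, then assignment
function testFn() { return 1 + 1; }
const libC = {};
libC.testFn = testFn;

// All three behave identically; any measured difference comes from how
// the engine optimizes each definition site, not from the semantics.
console.log(libA.testFn() === 2 && libB.testFn() === 2 && libC.testFn() === 2); // true
```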

Related

Performance difference between lodash "get" and "if else" clauses

Let's say you have a TypeScript object where any element can be undefined. If you want to access a heavily nested component, you have to do a lot of comparisons against undefined.
I wanted to compare two ways of doing this in terms of performance: regular if-else comparisons and the lodash function get.
I have found this beautiful tool called jsben where you can benchmark different pieces of JS code. However, I fail to interpret the results correctly.
In this test, lodash get seems to be slightly faster. However, if I define my variable in the Setup block (as opposed to the Boilerplate code), the if-else code is faster by a wide margin.
What is the proper way of benchmarking all this?
How should I interpret the results?
Is get so much slower that you can make an argument in favour of if-else clauses, in spite of the very poor readability?
I think you're asking the wrong question.
First of all, if you're going to do performance micro-optimization (as opposed to, say, algorithmic optimization), you should really know whether the code in question is a bottleneck in your system. Fix the worst bottlenecks until your performance is fine, then stop worrying overmuch about it. I'd be quite surprised if variation between these ever amounted to more than a rounding error in a serious application. But I've been surprised before; hence the need to test.
Then, when it comes to the actual optimization, the two implementations are only slightly different in speed, in either configuration. But if you want to test the deep access to your object, it looks as though the second one is the correct way to think about it. It doesn't seem as though it should make much difference in relative speeds, but the first one puts the initialization code where it will be "executed before every block and is part of the benchmark." The second one puts it where "it will be run before every test, and is not part of the benchmark." Since you want to compare data access and not data initialization, this seems more appropriate.
Given this, there seems to be a very slight performance advantage to the families && families.Trump && families.Trump.members && ... technique. (Note: no ifs or elses in sight here!)
But is it worth it? I would say not. The code is much, much uglier. I would not add a library such as lodash (or my favorite, Ramda) just to use a function as simple as this, but if I was already using lodash I wouldn't hesitate to use the simpler code here. And I might import one from lodash or Ramda, or simply write my own otherwise, as it's fairly simple code.
That native code is going to be faster than more generic library code shouldn't be a surprise. It doesn't always happen, as sometimes libraries get to take shortcuts that the native engine cannot, but it's likely the norm. The reason to use these libraries rarely has to do with performance, but with writing more expressive code. Here the lodash version wins, hands-down.
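The two access styles side by side, with a tiny stand-in for lodash's get (the real _.get handles array paths, default values, and more):

```javascript
// Simplified stand-in for _.get: walk a dot-separated path, bail on null/undefined.
function get(obj, path, fallback) {
  let cur = obj;
  for (const key of path.split(".")) {
    if (cur == null) return fallback;
    cur = cur[key];
  }
  return cur === undefined ? fallback : cur;
}

const families = { Trump: { members: ["Donald"] } };

// "Native" && chain from the answer
const viaChain = families && families.Trump && families.Trump.members;

// get-style lookup
const viaGet = get(families, "Trump.members");

console.log(viaChain === viaGet); // true: both reference the same array
```

On modern engines, optional chaining (`families?.Trump?.members`) gives get-like readability with native performance, which makes much of this trade-off moot.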
What is the proper way of benchmarking all this?
Only benchmark the actual code you are comparing; move as much as possible outside of the tested block. Run each of the two pieces a few (hundred) thousand times, to average out the influence of other parts.
How should I interpret the results?
1) check if they are valid:
Do the results fit your expectation?
If not, could there be a cause for that?
Does the testcase replicate your actual usecase?
2) check if the result is relevant:
How does the time it takes compare to the actual time in your use case? If your code takes 200ms to load, and both tests run in under ~1ms, your result doesn't matter. If you however try to optimize code that runs 60 times per second, 1ms is already a lot.
3) check if the result is worth the work
Often you have to do a lot of refactoring, or you have to type a lot; does the performance gain outweigh the time you invest?
Is get so much slower that you can make an argument in favour of if-else clauses, in spite of the very poor readability?
I'd say no. Use _.get (unless you are planning to run that a few hundred times per second).

Does creating shortcuts in JS slow performance down

By shortcut I mean an expression like: var doSmth = some.thing.doSmth (note that do by itself is a reserved word, so it can't be a variable name)
For example I have this code with shortcuts:
MyModule3 = {
  doSmth3 : function() {
    var doSmth1 = MyModule1.doings.doSmth1,
        doSmth2 = MyModule2.doings.doSmth2;
    doSmth1();
    doSmth2();
  }
}
And this one without shortcuts:
MyModule3 = {
  doSmth3 : function() {
    MyModule1.doings.doSmth1();
    MyModule2.doings.doSmth2();
  }
}
Will the first code be slower than the second one?
I guess the answer is "yes, the first one is slower, but there is no reason to worry about it, because difference in performance will not be significant".
But what if I have hundreds of functions with shortcuts? Can this difference become critical at some moment?
=======
I tried to test it, but I've got strange result.
When I run:
testWithShortcuts();
testWithoutShortcuts();
then the first test runs faster than the second one.
But If I run it in reversed order:
testWithoutShortcuts();
testWithShortcuts();
then again first test runs faster.
I'm not familiar with performance testing. Maybe my test does not actually do what it is supposed to do.
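The "whichever test runs first wins" effect is usually warm-up/JIT bias. One rough way to reduce it, sketched here with hypothetical functions: warm both candidates up before measuring, then interleave the measured runs.

```javascript
// Time one candidate for a fixed number of iterations.
function timeOnce(fn, iters = 100000) {
  const start = Date.now();
  for (let i = 0; i < iters; i++) fn();
  return Date.now() - start;
}

// Warm up both functions, then alternate measured runs so neither
// benefits systematically from running first.
function compare(fnA, fnB, rounds = 10) {
  timeOnce(fnA);
  timeOnce(fnB);
  let totalA = 0, totalB = 0;
  for (let r = 0; r < rounds; r++) {
    totalA += timeOnce(fnA);
    totalB += timeOnce(fnB);
  }
  return { totalA, totalB };
}

const result = compare(() => Math.sqrt(2), () => 2 ** 0.5);
console.log(result.totalA >= 0 && result.totalB >= 0); // true
```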
TL;DR
No.
Real answer
Yes; however, the measurable difference is so small that the concern is nonsensical 99% of the time.
Discussion
To drive home the point: it is better to optimize for maintainability, understandability, and clarity than to worry about a few extra pointers in memory. In fact, once compiled to machine code they are the same pointer; the only impact would be in the JIT step, which these days is measured in fractions of a millisecond.
It is better to code for readability, and only if the app starts to slow down, run some profiling, find the slow parts, and optimize those. Chances are the slow issue is something completely different from the extra alias you made.
Nope, that will never become critical. I'm tempted to take Paul Irish's words and say you're not even allowed to care about the performance of something like this ;)
Yes, the code where you create the shortcuts will be slightly slower, but the difference is not significant.
For the difference to be significant, assigning the value to the variable would have to be a reasonably large part of what you are doing. Just calling a function is quite more expensive than assigning a value to a variable (even if the function doesn't do much), so creating the shortcut will never be significant.

Why should I not slice on the arguments object in javascript? [duplicate]

In the bluebird docs, they have this as an anti-pattern that stops optimization.. They call it argument leaking,
function leaksArguments2() {
  var args = [].slice.call(arguments);
}
I do this all the time in Node.js. Is this really a problem. And, if so, why?
Assume only the latest version of Node.js.
Disclaimer: I am the author of the wiki page
It's a problem if the containing function is called a lot (is hot). Functions that leak arguments are not supported by the optimizing compiler (Crankshaft).
Normally when a function is hot, it will be optimized. However if the function contains unsupported features like leaking arguments, being a hot function doesn't help and it will continue running slow generic code.
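For contrast, here is the leaky pattern next to two alternatives that avoid passing arguments to another function (the loop copy was the standard Crankshaft-era workaround; rest parameters are the modern fix):

```javascript
// Leaky: `arguments` escapes into Array.prototype.slice.
function leaksArguments() {
  return [].slice.call(arguments);
}

// Workaround: copy `arguments` with a plain loop; nothing escapes.
function copiesArguments() {
  const args = new Array(arguments.length);
  for (let i = 0; i < arguments.length; i++) args[i] = arguments[i];
  return args;
}

// Modern: rest parameters avoid `arguments` entirely.
function usesRest(...args) {
  return args;
}

console.log(
  JSON.stringify(leaksArguments(1, 2, 3)) === JSON.stringify(usesRest(1, 2, 3))
); // true: all three produce [1, 2, 3]
```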
The performance of an optimized function compared to an unoptimized one is huge. For example consider a function that adds 3 doubles together: http://jsperf.com/213213213 21x difference.
What if it added 6 doubles together? 29x difference. Generally, the more code the function has, the more severe the punishment is for that function to run in unoptimized mode.
For node.js, stuff like this is in general actually a huge problem, because any CPU time completely blocks the server. Just by optimizing the url parser included in node core (my module is 30x faster in node's own benchmarks), the requests per second of mysql-express improve from 70K rps to 100K rps in a benchmark that queries a database.
Good news is that node core is aware of this
Is this really a problem
For application code, no. For almost any module/library code, no. For a library such as bluebird that is intended to be used pervasively throughout an entire codebase, yes. If you did this in a very hot function in your application, then maybe yes.
I don't know the details but I trust the bluebird authors as credible that accessing arguments in the ways described in the docs causes v8 to refuse to optimize the function, and thus it's something that the bluebird authors consider worth using a build-time macro to get the optimized version.
Just keep in mind the latency numbers that gave rise to node in the first place. If your application does useful things like talking to a database or the filesystem, then I/O will be your bottleneck and optimizing/caching/parallelizing those will pay vastly higher dividends than v8-level in-memory micro-optimizations such as above.

What's the consensus on using Function.caller in JavaScript?

A while ago I read that you shouldn't use Function.caller inside a function because it makes the function non-inlineable. To test this assertion I wrote the following benchmark:
Does Function.caller affect preformance? · jsPerf.
The results prove that using Function.caller indeed makes a function execute slower than normal:
In Opera it is 16% slower.
In Chrome it is 80% slower.
In Firefox it is 100% slower.
Hence my question is this: what's the consensus on using Function.caller in JavaScript? Is it alright to use it sparingly? Should it be shunned altogether?
As far as I know, dynamically inspecting the execution stack with caller/callee/etc is not allowed in strict mode so you can kind of see that as a consensus to avoid this feature if possible.
Anyway, why do you even want to use Function.caller in the first place? It makes your code depend on something that usually doesn't matter (the call stack), and data gets passed around implicitly instead of via explicit arguments. The only real use I ever saw for this kind of feature is printing stack traces, and in that case you can usually pay the performance cost or get around it with a debugger.
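A sketch of the explicit-argument alternative suggested above (names here are hypothetical): instead of inspecting the stack via Function.caller, pass whatever you needed from the caller as a parameter.

```javascript
// Strict-mode safe: the caller identifies itself explicitly.
function log(message, source) {
  return `[${source}] ${message}`;
}

function startup() {
  // With Function.caller you'd try log.caller.name inside log();
  // passing the name explicitly keeps the dependency visible.
  return log("booting", "startup");
}

console.log(startup()); // "[startup] booting"
```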
If performance is your only concern, it's probably fine. While massively slower than not referencing caller, my machine can still do that 1.6 million times per second.
"Slow" can be a relative term. If you only need to call it rarely, it does its magic fast enough most of the time. I just wouldn't put it in a big loop, iterated on every animation frame in my game.
However, this magic property has other problems. There are more concerns than just performance, as #missingno points out.

JavaScript - What Level of Code Optimization can one expect?

So, I am fairly new to JavaScript coding, though not new to coding in general. When writing source code I generally have in mind the environment my code will run in (e.g. a virtual machine of some sort) - and with it the level of code optimization one can expect. (1)
In Java for example, I might write something like this,
Foo foo = FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42));
blub.doSomethingImportantWithAFooObject(foo);
even if the foo object is only used at this very location (thus introducing a needless variable declaration). Firstly, it is my opinion that the code above is far more readable than the inlined version
blub.doSomethingImportantWithAFooObject(FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42)));
and secondly I know that Java compiler code optimization will take care of this anyway, i.e. the actual Java VM code will end up being inlined - so performance-wise, there is no difference between the two. (2)
Now to my actual Question:
What Level of Code Optimization can I expect in JavaScript in general?
I assume this depends on the JavaScript engine - but as my code will end up running in many different browsers lets just assume the worst and look at the worst case. Can I expect a moderate level of code optimization? What are some cases I still have to worry about?
(1) I do realize that finding good/the best algorithms and writing well organized code is more important and has a bigger impact on performance than a bit of code optimization. But that would be a different question.
(2) Now, I realize that the actual difference, were there no optimization, is small. But that is beside the point. There are easily features which are optimized quite efficiently; I was just kind of too lazy to write one down. Just imagine the above snippet inside a for loop which is called 100'000 times.
Don't expect much from the optimization; there won't be
tail-call optimization,
loop unrolling,
function inlining,
etc.
As client-side JavaScript is not designed to do heavy CPU work, the optimization won't make a huge difference.
There are some guidelines for writing high-performance JavaScript code; most are minor techniques, like:
Avoid certain functions like eval(), arguments.callee, etc., which prevent the JS engine from generating high-performance code.
Use native features over hand-written ones, e.g. don't write your own containers, JSON parser, etc.
Use local variables instead of global ones.
Never use a for-each loop over an array.
Use parseInt() rather than Math.floor.
AND stay away from jQuery.
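Some of these tips are dated. The parseInt-over-Math.floor one, for instance, tends to be backwards on modern engines, since parseInt coerces its argument to a string first. For truncating non-negative numbers the options agree, but watch the edge cases:

```javascript
const x = 3.7;
console.log(Math.floor(x));   // 3
console.log(x | 0);           // 3 (32-bit range only; truncates toward zero)
console.log(parseInt(x, 10)); // 3 (coerces 3.7 to "3.7" first)

// They diverge for negatives: floor rounds down, bitwise-or truncates.
console.log(Math.floor(-3.7)); // -4
console.log(-3.7 | 0);         // -3
```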
All these techniques are more like experience things, and may have reasonable explanations behind them. So you will have to spend some time searching around, or try jsPerf, to help you decide which approach is better.
When you release the code, use Closure Compiler to take care of dead branches and unnecessary variables; it will not boost your performance a lot, but it will make your code smaller.
Generally speaking, the final performance depends far more on how well your code is organized and how carefully your algorithm is designed than on how the optimizer performed.
Take your example above (assuming FooFactory.getFoo() and Bar.someStaticStuff("qux","gak",42) always return the same result, and that Bar and FooFactory are stateless, so someStaticStuff() and getFoo() won't change anything):
for (int i = 0; i < 10000000; i++)
  blub.doSomethingImportantWithAFooObject(
    FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42)));
Even g++ with the -O3 flag can't make that code faster, because the compiler can't tell whether Bar and FooFactory are stateless or not. So this kind of code should be avoided in any language.
You are right, the level of optimization differs from JS VM to VM. But! There is a way of working around that: several tools will optimize/minimize your code for you. One of the most popular ones, by Google, is called the Closure Compiler. You can try out the web version, and there is a cmd-line version for build scripts etc. Besides that there is not much I would try in terms of optimization, because after all JavaScript is sort of fast enough.
In general, I would posit that unless you're playing really dirty with your code (leaving all your vars at global scope, creating a lot of DOM objects, making expensive AJAX calls to non-optimal datasources, etc.), the real trick with optimizing performance will be in managing all the other things you're loading in at run-time.
Loading dozens upon dozens of images, animating huge background images, and pulling in large numbers of scripts and CSS files can all have a much greater impact on performance than even moderately complex JavaScript that is written well.
That said, a quick Google search turns up several sources on Javascript performance optimization:
http://www.developer.nokia.com/Community/Wiki/JavaScript_Performance_Best_Practices
http://www.nczonline.net/blog/2009/02/03/speed-up-your-javascript-part-4/
http://mir.aculo.us/2010/08/17/when-does-javascript-trigger-reflows-and-rendering/
As two of those links point out, the most expensive operations in a browser are reflows (where the browser has to redraw the interface due to DOM manipulation), so that's where you're going to want to be the most cautious in terms of performance. Some of that can be alleviated by being smart about what you're modifying on the fly (for example, it's less expensive to apply a class than to modify inline styles ad hoc), so separating your concerns (style from data) will be really important.
Making only the modifications you have to, in order to get the job done, (ie. rather than doing the "HULK SMASH (DOM)!" method of replacing entire chunks of pages with AJAX calls to screen-scraping remote sources, instead calling for JSON data to update only the minimum number of elements needed) and other common-sense approaches will get you a lot farther than hours of minor tweaking of a for-loop (though, again, common sense will get you pretty far, there, too).
Good luck!
