I'm doing some work involving processing an insane amount of data in the browser. As a result, I'm trying to optimize everything down to the nuts and bolts. I don't need anyone telling me that I'm wasting my time or that premature optimization is the root of all evil.
I would just like to know if anyone who understands how JS works can tell me whether a less-than comparison runs faster than a less-than-or-equals comparison. What I mean by that is, would:
return (i<2? 0:1)
Be parsed and run faster than:
return (i<=1? 0:1)
In this example we're assuming that i is an integer. Thanks.
The JavaScript standard describes the steps that need to be taken in order to evaluate those expressions. You can take a look at the ECMAScript 2015 Language Specification, section 12.9.3.
Be aware that even if there is a slight difference between the steps of those two operations, other things in your application that you cannot control from JavaScript will have much more influence on performance than these simple operations: the garbage collector, the just-in-time compiler, and so on.
Even trying to measure the time in JavaScript will not work, as just taking timestamps has a much bigger influence on performance than the actual expression you want to measure. Also, the code you wrote might not be the code that is actually evaluated, as the engine may apply optimizations before running it.
I wouldn't call this micro-optimisation, but rather nano-optimisation.
The cases are so similar that your measurement precision will most likely be below the gain you can expect...
(Edit)
If this code is optimised, the generated assembly will just change from JA to JAE (on x86), and both instructions use the same cycle count: a 0.0000% change.
If it is not, you might save one step within a select in the engine...
The annoying thing is that it makes you miss the larger picture: unless I'm wrong, you need a branch here, and if you're that worried about time, the statistical distribution of your input will influence the execution time WAY more (but still not that much...).
So take a step back and compare:
if (i < 2)
    return 0;
else
    return 1;

and:

if (i >= 2)
    return 1;
else
    return 0;
You see that for inputs like (100, 20, 10, 1, 50, 10), version (1) will branch far more often, while for (0, 1, 0, 0, 20, 1), version (2) branches more.
That will make much more of a difference... one that might just as well be very difficult to measure!!!
(As a question left to the reader, I wonder how return +(i>1) compiles, and whether there's a trick to avoid branching...)
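For what it's worth, a few branchless candidates are sketched below; whether any engine actually emits branch-free code for them is an assumption, not a measured fact:

// Each of these returns 0 or 1 with no if/else in the source;
// whether the JIT emits a real branch underneath is engine-dependent.
function viaUnaryPlus(i)  { return +(i > 1); }       // boolean coerced to number
function viaBitwiseOr(i)  { return (i > 1) | 0; }    // boolean coerced to int32
function viaNumberCall(i) { return Number(i > 1); }  // explicit conversion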
(By the way, I'm not against early optimisation; I even posted some advice here, if it might interest you: https://gamealchemist.wordpress.com/2016/04/15/writing-efficient-javascript-a-few-tips/ )
I have created a fiddle using the performance.now and console.time APIs.
Both APIs report how many milliseconds it took to execute the functions/loops.
I feel the major difference is in the result: performance.now gives a more precise value, i.e. down to 1/1000th of a millisecond.
https://jsfiddle.net/ztacgxf1/
function lessThan(){
    var t0 = performance.now();
    console.time("lessThan");
    for(var i = 0; i < 100000; i++){
        if(i < 1000){}
    }
    console.timeEnd("lessThan");
    var t1 = performance.now();
    console.log("Perf -- >>" + (t1-t0));
}

function lessThanEq(){
    var t0 = performance.now();
    console.time("lessThanEq");
    for(var i = 0; i < 100000; i++){
        if(i <= 999){}
    }
    console.timeEnd("lessThanEq");
    var t1 = performance.now();
    console.log("Perf -- >>" + (t1-t0));
}

lessThan();
lessThanEq();
I didn't see much of a difference. Maybe iterating more would give a different result.
Hope this helps you.
I am working on a very time-sensitive application that uses key presses for user input. As I am talking milliseconds here, I went ahead and tried a version like this:
function start() {
    //stim.style.display = "block";
    rt_start = new Date().getTime();
    response_allowed = 1;
}

function end() {
    var t = rt_end - rt_start;
    //stim.style.display = "none";
    log.innerHTML = t;
    i++;
    if (i < iterations) {
        setTimeout(start, 1000);
    }
}

var rt_start;
var rt_end;
var iterations = 100;
var i = 0;
var response_allowed = 0;
var stim;
var log;

$(document).ready(function() {
    document.onkeydown = function(e) {
        if (response_allowed == 1) {
            rt_end = new Date().getTime();
            response_allowed = 0;
            end();
        }
    };
    stim = document.getElementById('stim');
    log = document.getElementById('log');
    start();
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<div id="log"></div>
<img src="https://www.gravatar.com/avatar/cfefd93404e6b0eb3cde02b4b6df4e2b?s=128&d=identicon&r=PG&f=1" id="stim" />
And it works fine, usually giving sub-5ms timings (just holding down a key). But as soon as I modify the code to display the image (uncommenting the two lines), this slows down a lot, to about 30ms.
Can someone point me into the direction why exactly this is the case and how to possibly avoid this additional delay?
Thanks
I would recommend using a DOMHighResTimeStamp where available (with a polyfill for browsers that don't provide it).
It's a high-resolution timestamp (designed with accurate measurement in mind) to be used (e.g.) with the Navigation Timing and Web Performance APIs (search for this in the Mozilla Developer Network, as I can't share more than two links within a single post).
The quick way to get a DOMHighResTimeStamp - much like you do with var ts = new Date().getTime(); to get a regular millisecond timestamp - is:
var ts = performance.now();
As I said above, take a look at the Web Performance API at MDN. It will be very helpful if your application is really time-sensitive.
EDIT:
About your snippet: it seems to me that if you hold a key down, you will always be limited to the resolution of the keydown event (which fires repeatedly, but not every millisecond). You can easily see this behavior if you hold a character key down in a text editor and check how many times per second the character is written. This, I guess, is controlled via an OS setting.
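A tiny sketch to observe that repeat rate yourself (hold a key down and watch the console):

// Log the gap between successive keydown events while a key is held.
var lastKeydown = 0;
document.onkeydown = function () {
    var now = performance.now();
    if (lastKeydown) {
        console.log('keydown interval: ' + (now - lastKeydown).toFixed(1) + ' ms');
    }
    lastKeydown = now;
};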
You are also limited to the "drift" associated with setTimeout/setInterval. You see, setTimeout queues something for execution after a given delay, but it does not guarantee timely execution. It's a "best effort" scenario and, if the browser is busy doing something, it will drift significantly. Meaning: if you use a setTimeout to re-enable a response_allowed variable after 1 second, you can expect it to re-enable it after "about" (but not exactly) 1 second.
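A quick way to see the drift for yourself (a rough sketch; the measured value depends entirely on how busy the browser is):

// Request a 1000 ms delay and log how late the callback actually fires.
var requested = 1000;
var scheduledAt = performance.now();
setTimeout(function () {
    var drift = performance.now() - scheduledAt - requested;
    console.log('setTimeout drift: ' + drift.toFixed(2) + ' ms');
}, requested);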
http://jsfiddle.net/6L2pJ/
var test = function () {
    var i,
        a,
        startTime;

    startTime = new Date().getTime();
    for (i = 0; i < 3000000000; i = i + 1) {
        a = i % 5;
    }
    console.log(a); // prevent dead code elimination
    return new Date().getTime() - startTime;
};

var results = [];
for (var i = 0; i < 5; i = i + 1) {
    results.push(test());
}

for (var i = 0; i < results.length; i = i + 1) {
    console.log('Time needed: ' + results[i] + 'ms');
}
Results in:
First execution:
Time needed: 13654ms
Time needed: 32192ms
Time needed: 33167ms
Time needed: 33587ms
Time needed: 33630ms
Second execution:
Time needed: 14004ms
Time needed: 32965ms
Time needed: 33705ms
Time needed: 33923ms
Time needed: 33727ms
Third execution:
Time needed: 13124ms
Time needed: 30706ms
Time needed: 31555ms
Time needed: 32275ms
Time needed: 32752ms
What is the reason for the jump from the first to the second row?
My setup:
Ubuntu 13.10
Google Chrome 36.0.1985.125 (Mozilla Firefox 30.0 gives the same kind of results)
EDIT:
I modified the code, leaving it semantically the same but inlining everything. Interestingly, this not only speeds up the execution significantly but also removes, to a great extent, the phenomenon I described above. A slight jump is still noticeable though.
Modified code:
http://jsfiddle.net/cay69/
Results:
First execution:
Time needed: 13786ms
Time needed: 14402ms
Time needed: 14261ms
Time needed: 14355ms
Time needed: 14444ms
Second execution:
Time needed: 13778ms
Time needed: 14293ms
Time needed: 14236ms
Time needed: 14459ms
Time needed: 14728ms
Third execution:
Time needed: 13639ms
Time needed: 14375ms
Time needed: 13824ms
Time needed: 14125ms
Time needed: 14081ms
After a bit of testing, I think I have pinpointed what may be causing the difference. I think it must have something to do with types.
var i,
    a = 0,
    startTime;
var a = 0 gives me a uniform result with overall faster performance; on the other hand, var a = "0" gives me the same result as yours: the first run is somewhat faster.
I have no clue why this happens.
The following is only a pseudo-answer, which I hope may become updated by the community. Originally, it was going to be a comment, but it became too lengthy, too quickly and thus needed to be posted as an answer.
Interesting findings
In running a few tests, I couldn't find any correlation to console.log being used. Testing in OSX Safari, I found that the problem existed with and without printing to the console.
What I did notice was a pattern. There was an inflection point as I approached 2147483648 (2^31) from your initial starting value. This most likely depends on the user's environment, but I found an inflection point around 2147485000 (try numbers above and below; 2147430000..2147490000). Somewhere around this number is where the timings became more uniform.
I was really hoping it would be exactly 2^31, since that number is also significant in computer terms: it is the boundary of a signed 32-bit integer. However, my tests resulted in a number slightly higher than that (for reasons unknown at this point). Other than making sure the swap file wasn't being used, I didn't do any other memory analysis.
EDIT from asker:
On my setup it actually is exactly 2^31 where the jump occurs. I tested it by playing around with the following code:
http://jsfiddle.net/8w24v/
This information may support Derek's initialization observation.
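The fiddle itself isn't reproduced here, but a boundary probe of that kind might look like the sketch below (the start values are illustrative assumptions, not the fiddle's exact numbers):

// Time the same loop body starting below 2^31 and starting at 2^31.
function probe(start) {
    var a, t0 = new Date().getTime();
    for (var i = start; i < start + 100000000; i = i + 1) {
        a = i % 5;
    }
    console.log(a); // prevent dead code elimination
    return new Date().getTime() - t0;
}
console.log('stays below 2^31: ' + probe(Math.pow(2, 31) - 200000000) + 'ms');
console.log('starts at 2^31:   ' + probe(Math.pow(2, 31)) + 'ms');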
This is just a thought and might be a stretch:
LLVM or something else may be performing some up-front optimizations. Perhaps the loop variable starts out as an int, and after a pass or two the optimizer notices that the variable becomes a long. In trying to optimize, it sets it as a long up front; only in this case that isn't an optimization that saves time, since working with a regular integer performs better than paying the conversion cost from int to long.
I wouldn't be surprised if the answer was somewhere in the ECMAScript documentation :)
It appears that Google Chrome is breaking up your script execution into chunks and giving processing time to other processes. It's not noticeable until your execution hits around 600ms per function call. I tested with a smaller subset of data (300000000, if I remember correctly).
I'm operating a listener in CasperJS that visits a few private websites & waits for certain configurations of data. Right now, this operates adequately, but not optimally, on a numbered For loop, along these lines:
for (var p = 20000; p-- > 0;) {
// ... c.900 lines of code ....
}
While loops & Do-While loops don't work, due to scoping issues with several instances of Casper.then.
What I'm really looking to do is cron the code over a day timer, to operate between 6am and midnight, something like:
// as global variable
function militarytime () {
    var currentTime = new Date();
    var hours = currentTime.getHours();
    var minutes = currentTime.getMinutes();
    var military = (hours * 100) + minutes;
    return military;
}
var p = militarytime();
// then within code,
for (t=p; (t=p) && (p>600); t++)
This particular way of doing it (and I've tried many) just hangs in CasperJS.
The code has been operating, suboptimally, in a production environment for several weeks, and I've been searching stackoverflow & casperjs/api during that time to no avail. Any suggestions?
Thanks in advance.
This may be a stupid question, but are you firing the run() function? I've forgotten to include it before, which caused my program to just hang; generally it's the only reason I've seen for it to hang without an error.
CPU Cycles, Memory Usage, Execution Time, etc.?
Added: Is there a quantitative way of testing performance in JavaScript besides just perception of how fast the code runs?
Profilers are definitely a good way to get numbers, but in my experience, perceived performance is all that matters to the user/client. For example, we had a project with an Ext accordion that expanded to show some data and then a few nested Ext grids. Everything was actually rendering pretty fast, no single operation took a long time, there was just a lot of information being rendered all at once, so it felt slow to the user.
We 'fixed' this, not by switching to a faster component, or optimizing some method, but by rendering the data first, then rendering the grids with a setTimeout. So, the information appeared first, then the grids would pop into place a second later. Overall, it took slightly more processing time to do it that way, but to the user, the perceived performance was improved.
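The pattern was roughly the following (a simplified sketch; renderData and renderGrids are hypothetical stand-ins for whatever your framework actually calls):

// Render the cheap, important content synchronously...
renderData();
// ...then yield to the browser so it can paint, and let the
// heavy grid components render on the next timer tick.
setTimeout(function () {
    renderGrids();
}, 0);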
These days, the Chrome profiler and other tools are universally available and easy to use, as are
console.time() (mozilla-docs, chrome-docs)
console.profile() (mozilla-docs, chrome-docs)
performance.now() (mozilla-docs)
Chrome also gives you a timeline view which can show you what is killing your frame rate, where the user might be waiting, etc.
Finding documentation for all these tools is really easy, you don't need an SO answer for that. 7 years later, I'll still repeat the advice of my original answer and point out that you can have slow code run forever where a user won't notice it, and pretty fast code running where they do, and they will complain about the pretty fast code not being fast enough. Or that your request to your server API took 220ms. Or something else like that. The point remains that if you take a profiler out and go looking for work to do, you will find it, but it may not be the work your users need.
I do agree that perceived performance is really all that matters. But sometimes I just want to find out which method of doing something is faster. Sometimes the difference is HUGE and worth knowing.
You could just use JavaScript timers. But I typically get much more consistent results using the native Chrome (now also in Firefox and Safari) devtools methods console.time() & console.timeEnd().
Example of how I use it:
var iterations = 1000000;

console.time('Function #1');
for (var i = 0; i < iterations; i++) {
    functionOne();
}
console.timeEnd('Function #1');

console.time('Function #2');
for (var i = 0; i < iterations; i++) {
    functionTwo();
}
console.timeEnd('Function #2');
Update (4/4/2016):
Chrome Canary recently added line-level profiling to the dev tools Sources tab, which lets you see exactly how long each line took to execute!
We can always measure the time taken by any function with a simple Date object.
var start = +new Date(); // log start timestamp
function1();
var end = +new Date(); // log end timestamp
var diff = end - start;
Try jsPerf. It's an online javascript performance tool for benchmarking and comparing snippets of code. I use it all the time.
Most browsers are now implementing high resolution timing in performance.now(). It's superior to new Date() for performance testing because it operates independently from the system clock.
Usage
var start = performance.now();
// code being timed...
var duration = performance.now() - start;
References
https://developer.mozilla.org/en-US/docs/Web/API/Performance.now()
http://www.w3.org/TR/hr-time/#dom-performance-now
JSLitmus is a lightweight tool for creating ad-hoc JavaScript benchmark tests
Let's examine the performance difference between a function expression and the Function constructor:
<script src="JSLitmus.js"></script>
<script>
    JSLitmus.test("new Function ...", function() {
        return new Function("for(var i=0; i<100; i++) {}");
    });

    JSLitmus.test("function() ...", function() {
        return (function() { for(var i=0; i<100; i++) {} });
    });
</script>
What I did above is create a function expression and a Function constructor performing the same operation. The result is as follows:
FireFox Performance Result
IE Performance Result
Some people are suggesting specific plug-ins and/or browsers. I would not, because they're only really useful for that one platform; a test run on Firefox will not translate accurately to IE7. Considering 99.999999% of sites have more than one browser visiting them, you need to check performance on all the popular platforms.
My suggestion would be to keep this in JS. Create a benchmarking page with all your JS tests on it and time the execution. You could even have it AJAX-post the results back to you to keep it fully automated (see the sketch below).
Then just rinse and repeat over different platforms.
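As a sketch, that AJAX-post step could be as simple as the following (the endpoint URL and the result shape are placeholders, not a real API):

// Collect { testName: milliseconds } pairs from the benchmark page,
// then send them home for aggregation across browsers/platforms.
function reportResults(results) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/benchmark-results'); // hypothetical endpoint
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify({
        userAgent: navigator.userAgent,
        results: results
    }));
}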
Here is a simple function that displays the execution time of a passed in function:
var perf = function(testName, fn) {
    var startTime = new Date().getTime();
    fn();
    var endTime = new Date().getTime();
    console.log(testName + ": " + (endTime - startTime) + "ms");
};
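Usage is then a one-liner, for example:

perf('String concatenation', function () {
    var s = '';
    for (var i = 0; i < 100000; i++) { s += 'x'; }
});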
I have a small tool where I can quickly run small test-cases in the browser and immediately get the results:
JavaScript Speed Test
You can play with code and find out which technique is better in the tested browser.
I think JavaScript performance (time) testing is quite sufficient. I found a very handy article about JavaScript performance testing here.
You could use this: http://getfirebug.com/js.html. It has a profiler for JavaScript.
I was looking for something similar and found this.
https://jsbench.me/
It allows a side-by-side comparison and you can then also share the results.
performance.mark (Chrome 87+)
performance.mark('initSelect - start');
initSelect();
performance.mark('initSelect - end');
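You can then turn the two marks into a named measurement with performance.measure (part of the same User Timing API) and read it back from the performance timeline:

// Creates a measure entry spanning the two marks above.
performance.measure('initSelect', 'initSelect - start', 'initSelect - end');
const [entry] = performance.getEntriesByName('initSelect');
console.log(`initSelect took ${entry.duration} ms`);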
Quick answer
On jQuery (more specifically on Sizzle), we use this (check out master and open speed/index.html in your browser), which in turn uses benchmark.js. This is used to performance-test the library.
Long answer
If the reader doesn't know the difference between benchmarks, workloads, and profilers, first read some performance-testing foundations in the "readme 1st" section of spec.org. This is for system testing, but understanding these foundations will help with JS perf testing as well. Some highlights:
What is a benchmark?
A benchmark is "a standard of measurement or evaluation" (Webster’s II Dictionary). A computer benchmark is typically a computer program that performs a strictly defined set of operations - a workload - and returns some form of result - a metric - describing how the tested computer performed. Computer benchmark metrics usually measure speed: how fast was the workload completed; or throughput: how many workload units per unit time were completed. Running the same computer benchmark on multiple computers allows a comparison to be made.
Should I benchmark my own application?
Ideally, the best comparison test for systems would be your own application with your own workload. Unfortunately, it is often impractical to get a wide base of reliable, repeatable and comparable measurements for different systems using your own application with your own workload. Problems might include generation of a good test case, confidentiality concerns, difficulty ensuring comparable conditions, time, money, or other constraints.
If not my own application, then what?
You may wish to consider using standardized benchmarks as a reference point. Ideally, a standardized benchmark will be portable, and may already have been run on the platforms that you are interested in. However, before you consider the results you need to be sure that you understand the correlation between your application/computing needs and what the benchmark is measuring. Are the benchmarks similar to the kinds of applications you run? Do the workloads have similar characteristics? Based on your answers to these questions, you can begin to see how the benchmark may approximate your reality.
Note: A standardized benchmark can serve as reference point. Nevertheless, when you are doing vendor or product selection, SPEC does not claim that any standardized benchmark can replace benchmarking your own actual application.
Performance testing JS
Ideally, the best perf test would be using your own application with your own workload, switching out what you need to test: different libraries, machines, etc.
If this is not feasible (and usually it is not), the first important step is to define your workload. It should reflect your application's workload. In this talk, Vyacheslav Egorov talks about shitty workloads you should avoid.
Then you could use tools like benchmark.js to help you collect metrics, usually speed or throughput. On Sizzle, we're interested in comparing how fixes or changes affect the systemic performance of the library.
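A minimal benchmark.js suite looks roughly like this (assuming Benchmark is loaded on the page; the two string-search workloads are placeholders, so substitute a workload that reflects your application, as argued above):

// Compare two candidate implementations of the same operation.
var suite = new Benchmark.Suite();
suite
    .add('RegExp#test', function () { /o/.test('Hello World!'); })
    .add('String#indexOf', function () { 'Hello World!'.indexOf('o') > -1; })
    .on('cycle', function (event) { console.log(String(event.target)); })
    .on('complete', function () {
        console.log('Fastest is ' + this.filter('fastest').map('name'));
    })
    .run({ async: true });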
If something is performing really badly, your next step is to look for bottlenecks.
How do I find bottlenecks? Profilers
What is the best way to profile javascript execution?
I find execution time to be the best measure.
You could use console.profile in Firebug.
I usually just test JavaScript performance, i.e. how long a script runs. jQuery Lover gave a good article link for testing JavaScript code performance, but the article only shows how to test how long your JavaScript code runs. I would also recommend reading the article "5 tips on improving your jQuery code while working with huge data sets".
Here is a reusable class for time performance. Example is included in code:
/*
    Help track time lapse - tells you the time difference between each "check()" and since the "start()"
*/
var TimeCapture = function () {
    var start = new Date().getTime();
    var last = start;
    var now = start;

    this.start = function () {
        start = new Date().getTime();
    };

    this.check = function (message) {
        now = new Date().getTime();
        console.log(message, 'START:', now - start, 'LAST:', now - last);
        last = now;
    };
};

//Example:
var time = new TimeCapture();
//begin tracking time
time.start();
//...do stuff
time.check('say something here'); //look at your console for output
//..do more stuff
time.check('say something else'); //look at your console for output
//..do more stuff
time.check('say something else one more time'); //look at your console for output
UX Profiler approaches this problem from the user's perspective. It groups all the browser events, network activity, etc. caused by some user action (a click) and takes into consideration all aspects like latency, timeouts, etc.
Performance testing has become something of a buzzword of late, but that's not to say it isn't an important process in QA, or even after the product has shipped. While I develop an app I use many different tools, some of them mentioned above, like the Chrome profiler. But I usually look for a SaaS or something open source that I can get going and then forget about until I get that alert saying that something went belly up.
There are lots of awesome tools that will help you keep an eye on performance without having you jump through hoops just to get some basics alerts set up. Here are a few that I think are worth checking out for yourself.
Sematext.com
Datadog.com
Uptime.com
Smartbear.com
Solarwinds.com
To try and paint a clearer picture, here is a little tutorial on how to set up monitoring for a React application.
You could use https://github.com/anywhichway/benchtest which wraps existing Mocha unit tests with performance tests.
The golden rule is to NOT, under ANY circumstances, lock your user's browser. After that, I usually look at execution time, followed by memory usage (unless you're doing something crazy, in which case memory could be a higher priority).
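One common way to honor that rule for long-running loops is to chunk the work and yield between chunks (a generic sketch; the chunk size is an arbitrary assumption):

// Process a big array in slices, yielding to the browser between slices.
function processInChunks(items, handleItem, chunkSize) {
    var index = 0;
    function next() {
        var end = Math.min(index + chunkSize, items.length);
        for (; index < end; index++) {
            handleItem(items[index]);
        }
        if (index < items.length) {
            setTimeout(next, 0); // let the UI breathe before continuing
        }
    }
    next();
}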
This is a very old question, but I think we can contribute a simple solution based on ES6 for quickly testing your code.
This is a basic bench for execution time. We use performance.now() to improve accuracy:
/**
 * Figure out how long it takes for a method to execute.
 *
 * @param {Function} method - the method to test.
 * @param {Array} list - set of argument arrays to pass in.
 * @param {number} iterations - number of executions.
 * @param {T} context - the context to call the method in.
 * @return {number} the time it took, in milliseconds, to execute.
 */
const bench = (method, list, iterations, context) => {
    let start = 0
    const timer = action => {
        const time = performance.now()
        switch (action) {
            case 'start':
                start = time
                return 0
            case 'stop': {
                const elapsed = time - start
                start = 0
                return elapsed
            }
            default:
                return time - start
        }
    }

    const result = []
    timer('start')
    list = [...list]
    for (let i = 0; i < iterations; i++) {
        for (const args of list) {
            result.push(method.apply(context, args))
        }
    }
    const elapsed = timer('stop')
    console.log(`Called method [${method.name}]
Mean: ${elapsed / iterations}
Exec. time: ${elapsed}`)
    return elapsed
}
const fnc = () => {}
const isFunction = (f) => f && f instanceof Function
const isFunctionFaster = (f) => f && 'function' === typeof f
class A {}
function basicFnc(){}
async function asyncFnc(){}
const arrowFnc = ()=> {}
const arrowRFnc = ()=> 1
// Not functions
const obj = {}
const arr = []
const str = 'function'
const bol = true
const num = 1
const a = new A()
const list = [
    [isFunction],
    [basicFnc],
    [arrowFnc],
    [arrowRFnc],
    [asyncFnc],
    [Array],
    [Date],
    [Object],
    [Number],
    [String],
    [Symbol],
    [A],
    [obj],
    [arr],
    [str],
    [bol],
    [num],
    [a],
    [null],
    [undefined],
]
const e1 = bench(isFunction, list, 10000)
const e2 = bench(isFunctionFaster, list, 10000)
const rate = e2/e1
const percent = Math.abs(1 - rate)*100
console.log(`[isFunctionFaster] is ${(percent).toFixed(2)}% ${rate < 1 ? 'faster' : 'slower'} than [isFunction]`)
This is a good way of collecting performance information for a specific operation:
var start = new Date().getTime();
for (var n = 0; n < maxCount; n++) {
    /* perform the operation to be measured */
}
var elapsed = new Date().getTime() - start;
assert(true, "Measured time: " + elapsed);