How to calculate runtime speed in a standalone JavaScript program?

I am doing some JavaScript exercises and thinking about ways to improve my solution (algorithm). I want to measure the runtime before and after tweaking the code so I know how much the speed has improved. I searched and found this method and thought I could do the same. Here is what I did:
var startTime = new Date().getTime();
var endTime;
function testTargetAlgorithm(){
....
....
}
testTargetAlgorithm();
endTime = new Date().getTime(); // take a fresh timestamp here; reusing the Date object created at the start would always give a difference of 0
console.log(endTime - startTime);
It's a very simple algorithm, so I don't expect a notable difference between the two times. But if milliseconds cannot capture the speed improvement, what else can I do?

You can use performance.now() if the engine supports it. This gives a time in milliseconds, with sub-millisecond precision, since the page loaded or app started.
performance.now() // 26742.766999999956
I know Chrome supports it, but I'm not sure about other browsers, Node.js, or other standalone JS engines.
Or you can run your code many times in a loop, and measure the total time taken.
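For example, a minimal sketch combining both ideas (testTargetAlgorithm stands in for your own function; Date.now() works where performance.now() is unavailable):
var iterations = 100000;
var start = performance.now();
for (var i = 0; i < iterations; i++) {
    testTargetAlgorithm();
}
var total = performance.now() - start;
console.log('total: ' + total.toFixed(3) + ' ms, per call: ' + (total / iterations).toFixed(6) + ' ms');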

Run the same function again and again.
var startTime = (new Date()).getTime();
for (var i = 0; i < 1000; i++) {
    testTargetAlgorithm();
}
var endTime = (new Date()).getTime();
console.log(endTime - startTime);

I ended up using process.hrtime() to provide nanosecond precision for measuring runtime performance. Note this method only works with Node.js. In Chrome & Firefox, you can use performance.now().
Even when running the same algorithm/function, the returned time difference still varies (on the order of nanoseconds, though), presumably due to CPU load and other effects, so it is best to run it a good number of times and calculate the average. For example:
function calAvgSpeed(timesToRun, targetAlgorithm){
    var diffSum = 0;
    for(var i = 1; i <= timesToRun; i++){
        var startTime = process.hrtime();
        targetAlgorithm();
        var diff = process.hrtime(startTime); // [seconds, nanoseconds]
        diffSum += diff[0] * 1e9 + diff[1];   // convert to nanoseconds
    }
    return Math.floor(diffSum / timesToRun);
}
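A usage sketch for the helper above (targetAlgorithm is a placeholder for whatever you are measuring):
function targetAlgorithm() {
    // ... the code being measured ...
}
console.log(calAvgSpeed(1000, targetAlgorithm) + ' ns on average');

// On Node.js 10.7+, process.hrtime.bigint() is a simpler alternative
// that returns a single nanosecond count as a BigInt:
const startNs = process.hrtime.bigint();
targetAlgorithm();
console.log(process.hrtime.bigint() - startNs, 'ns');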

Related



Measuring page load timings using JavaScript

I have created a script in JavaScript that is injected into our Ext JS application during automated browser testing. The script measures the amount of time taken to load the data in our grids.
Specifically, the script polls each grid, looks to see if there is a first row or a 'no data' message, and once all grids have satisfied this condition the script records the value between Date.now() and performance.timing.fetchStart, and treats this as the time the page took to load.
This script works more or less as expected, however when compared with human measured timings (Google stopwatch ftw), the time reported by this test is consistently around 300 milliseconds longer than when measured by stopwatch.
My questions are these:
Is there a hole in this logic that would lead to incorrect results?
Are there any alternative and accurate ways to achieve this measurement?
The script is as follows:
function loadPoll() {
    var i, duration,
        dataRow = '.firstRow', noDataRow = '.noData',
        grids = ['.grid1', '.grid2', '.grid3', '.grid4', '.grid5', '.grid6', '.grid7'];
    for (i = 0; i < grids.length; ++i) {
        var data = grids[i] + ' ' + dataRow,
            noData = grids[i] + ' ' + noDataRow;
        if (!(document.querySelector(data) || document.querySelector(noData))) {
            window.setTimeout(loadPoll, 100);
            return;
        }
    }
    duration = Date.now() - performance.timing.fetchStart;
    window.loadTime = duration;
}
loadPoll();
Some considerations:
- Although I am aware that human response time can be slow, I am sure that the 300 millisecond inconsistency is not introduced by the human factor of using Google stopwatch.
- Looking at the code it might appear that the polling of multiple elements could lead to the 300 ms inconsistency, however when I change the number of elements being monitored from 7 to 1, there still appears to be a 300 ms surplus in the time reported by the automated test.
- Our automated tests are executed in a framework controlled by Selenium and Protractor.
Thanks in advance if you are able to provide any insight to this!
If you use performance.now() the time should be accurate to 5 microseconds. According to MDN:
"The performance.now() method returns a DOMHighResTimeStamp, measured in milliseconds, accurate to five thousandths of a millisecond (5 microseconds). The returned value represents the time elapsed since the time origin (the PerformanceTiming.navigationStart property)."
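So, in the polling script above, the final measurement could arguably use the high-resolution clock instead (performance.now() counts from navigationStart, which normally precedes fetchStart by only a few milliseconds):
// Sketch: replace the Date.now()-based line in loadPoll() with:
duration = performance.now();
window.loadTime = duration;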
If I were you, I would revise how the actual measurement of the time is captured. Rather than evaluating the time for each loadPoll() call, you can evaluate how many calls you can perform in a given period of time. In other words, count the number of function iterations over a longer period of time, e.g. 1000 milliseconds. Here's how this can be done:
var timeout = 1000;
var startTime = new Date().getTime();
var elapsedTime = 0;
for (var iterations = 0; elapsedTime < timeout; iterations++) {
loadPoll();
elapsedTime = new Date().getTime() - startTime;
}
// output the number of achieved iterations
console.log(iterations);
This approach will give you more consistent and accurate time estimates. Faster systems will simply achieve a greater number of iterations. Keep in mind that setInterval()/setTimeout() are not perfectly precise and for really small interval timers these functions may give you invalid results due to garbage collection, demands from events and many other things that can run in parallel while your code is being executed.

Monotonically increasing time in JavaScript?

What’s the best way to get monotonically increasing time in JavaScript? I’m hoping for something like Java’s System.nanoTime().
Date() obviously won’t work, as it’s affected by system time changes.
In other words, what I would like is for a <= b, always:
a = myIncreasingTime.getMilliseconds();
...
// some time later, maybe seconds, maybe days
b = myIncreasingTime.getMilliseconds();
At best, even when using the UTC functions in Date(), it will return what it believes is the correct time, but if someone sets the time backward, the next call to Date() can return a lesser value. System.nanoTime() does not suffer from this limitation (at least not until the system is rebooted).
Edit [2012-02-26, not intended to affect the original question, which has a bounty]: I am not interested in knowing the "wall time"; I'm interested in knowing elapsed time with some accuracy, which Date() cannot possibly provide.
You could use window.performance.now() (since Firefox 15) or window.performance.webkitNow() (Chrome 20):
var a = window.performance.now();
//...
var delay = window.performance.now() - a;
You could wrap Date() or Date.now() so as to force it to be monotonic (but inaccurate). Sketch, untested:
var offset = 0;
var seen = 0;
function time() {
var t = Date.now();
if (t < seen) {
offset += (seen - t);
}
seen = t;
return t + offset;
}
If the system clock is set back at a given moment, then it will appear that no time has passed (and an elapsed time containing that interval will be incorrect), but you will at least not have negative deltas. If there are no set-backs then this returns the same value as Date.now().
This might be a suitable solution if you're writing a game simulation loop, for example, where time() is called extremely frequently — the maximum error is the number of set-backs times the interval between calls. If your application doesn't naturally do that, you could explicitly call it on a setInterval, say (assuming that isn't hosed by the system clock), to keep your accuracy at the cost of some CPU time.
It is also possible that the clock will be set forward, which does not prevent monotonicity but might have equally undesirable effects (e.g. a game spending too long trying to catch up its simulation at once). However, this is not especially distinguishable from the machine having been asleep for some time. If such a protection is desired, it just means changing the condition next to the existing one, with a constant threshold for acceptable progress:
if (t > seen + leapForwardMaximum) {
offset += (seen - t) + leapForwardMaximum;
}
I would suggest that leapForwardMaximum should be set to more than 1000 ms because, for example, Chrome (if I recall correctly) throttles timers in background tabs to fire not more than once per second.
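A quick usage sketch of the wrapper above:
var t0 = time();
// ... later, even if the system clock is set back in between ...
var t1 = time();
console.log('elapsed: ' + (t1 - t0) + ' ms'); // never negative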
JavaScript itself does not have any functionality to access nanosecond time. You might load a Java applet to acquire that information, like benchmark.js has done. Maybe @mathias can shed some light on what they did there…
Firefox passes the actual "lateness" of a timeout as an argument to setTimeout callbacks. This is one way to implement a monotonically increasing time counter:
var time = 0;
setTimeout(function x(actualLateness) {
setTimeout(x, 0);
time += actualLateness;
}, 0);

How do you performance test JavaScript code?

CPU Cycles, Memory Usage, Execution Time, etc.?
Added: Is there a quantitative way of testing performance in JavaScript besides just perception of how fast the code runs?
Profilers are definitely a good way to get numbers, but in my experience, perceived performance is all that matters to the user/client. For example, we had a project with an Ext accordion that expanded to show some data and then a few nested Ext grids. Everything was actually rendering pretty fast, no single operation took a long time, there was just a lot of information being rendered all at once, so it felt slow to the user.
We 'fixed' this, not by switching to a faster component, or optimizing some method, but by rendering the data first, then rendering the grids with a setTimeout. So, the information appeared first, then the grids would pop into place a second later. Overall, it took slightly more processing time to do it that way, but to the user, the perceived performance was improved.
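A rough sketch of that technique (renderData and renderGrids are hypothetical stand-ins for the Ext rendering calls):
// Show the cheap, informative part immediately...
renderData(container);
// ...then let the browser paint before the heavy grids render.
setTimeout(function () {
    renderGrids(container);
}, 0);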
These days, the Chrome profiler and other tools are universally available and easy to use, as are
console.time() (mozilla-docs, chrome-docs)
console.profile() (mozilla-docs, chrome-docs)
performance.now() (mozilla-docs)
Chrome also gives you a timeline view which can show you what is killing your frame rate, where the user might be waiting, etc.
Finding documentation for all these tools is really easy; you don't need an SO answer for that. Seven years later, I'll still repeat the advice of my original answer: you can have slow code running forever where a user won't notice it, and pretty fast code running where they do, and they will complain about the pretty fast code not being fast enough. Or that your request to your server API took 220 ms. Or something else like that. The point remains that if you take a profiler out and go looking for work to do, you will find it, but it may not be the work your users need.
I do agree that perceived performance is really all that matters. But sometimes I just want to find out which method of doing something is faster. Sometimes the difference is HUGE and worth knowing.
You could just use JavaScript timers, but I typically get much more consistent results using the native Chrome (now also in Firefox and Safari) DevTools methods console.time() and console.timeEnd().
Example of how I use it:
var iterations = 1000000;

console.time('Function #1');
for (var i = 0; i < iterations; i++) {
    functionOne();
}
console.timeEnd('Function #1');

console.time('Function #2');
for (var i = 0; i < iterations; i++) {
    functionTwo();
}
console.timeEnd('Function #2');
Update (4/4/2016):
Chrome Canary recently added line-level profiling in the DevTools Sources tab, which lets you see exactly how long each line took to execute!
We can always measure the time taken by any function with a simple Date object.
var start = +new Date(); // log start timestamp
function1();
var end = +new Date(); // log end timestamp
var diff = end - start;
Try jsPerf. It's an online JavaScript performance tool for benchmarking and comparing snippets of code. I use it all the time.
Most browsers are now implementing high resolution timing in performance.now(). It's superior to new Date() for performance testing because it operates independently from the system clock.
Usage
var start = performance.now();
// code being timed...
var duration = performance.now() - start;
References
https://developer.mozilla.org/en-US/docs/Web/API/Performance.now()
http://www.w3.org/TR/hr-time/#dom-performance-now
JSLitmus is a lightweight tool for creating ad-hoc JavaScript benchmark tests.
Let's examine the performance of a function expression versus the Function constructor:
<script src="JSLitmus.js"></script>
<script>
JSLitmus.test("new Function ... ", function() {
return new Function("for(var i=0; i<100; i++) {}");
});
JSLitmus.test("function() ...", function() {
return (function() { for(var i=0; i<100; i++) {} });
});
</script>
What I did above is create a function expression and a Function constructor performing the same operation. The result is as follows:
[Screenshots: Firefox performance result and IE performance result]
Some people are suggesting specific plug-ins and/or browsers. I would not because they're only really useful for that one platform; a test run on Firefox will not translate accurately to IE7. Considering 99.999999% of sites have more than one browser visit them, you need to check performance on all the popular platforms.
My suggestion would be to keep this in the JS. Create a benchmarking page with all your JS tests on it and time the execution. You could even have it AJAX-post the results back to you to keep it fully automated.
Then just rinse and repeat over different platforms.
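For instance, the benchmark page could report its timings like this (the /benchmarks endpoint is hypothetical):
var results = { test: 'algorithmV2', ms: elapsed, ua: navigator.userAgent };
var xhr = new XMLHttpRequest();
xhr.open('POST', '/benchmarks'); // hypothetical collection endpoint
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify(results));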
Here is a simple function that displays the execution time of a passed in function:
var perf = function(testName, fn) {
var startTime = new Date().getTime();
fn();
var endTime = new Date().getTime();
console.log(testName + ": " + (endTime - startTime) + "ms");
}
I have a small tool where I can quickly run small test-cases in the browser and immediately get the results:
JavaScript Speed Test
You can play with code and find out which technique is better in the tested browser.
I think JavaScript performance (time) testing is quite enough. I found a very handy article about JavaScript performance testing here.
You could use this: http://getfirebug.com/js.html. It has a profiler for JavaScript.
I was looking for something similar and found this:
https://jsbench.me/
It allows a side-by-side comparison, and you can then also share the results.
performance.mark (Chrome 87+)
performance.mark('initSelect - start');
initSelect();
performance.mark('initSelect - end');
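Marks only record timestamps; to read the elapsed time between them, pair them with performance.measure and query the resulting entry:
performance.measure('initSelect', 'initSelect - start', 'initSelect - end');
var entry = performance.getEntriesByName('initSelect')[0];
console.log(entry.duration + ' ms');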
Quick answer
On jQuery (more specifically on Sizzle), we use this (check out master and open speed/index.html in your browser), which in turn uses benchmark.js. This is used to performance-test the library.
Long answer
If the reader doesn't know the difference between benchmarks, workloads and profilers, first read some performance testing foundations in the "readme 1st" section of spec.org. This is for system testing, but understanding these foundations will help with JS perf testing as well. Some highlights:
What is a benchmark?
A benchmark is "a standard of measurement or evaluation" (Webster’s II Dictionary). A computer benchmark is typically a computer program that performs a strictly defined set of operations - a workload - and returns some form of result - a metric - describing how the tested computer performed. Computer benchmark metrics usually measure speed: how fast was the workload completed; or throughput: how many workload units per unit time were completed. Running the same computer benchmark on multiple computers allows a comparison to be made.
Should I benchmark my own application?
Ideally, the best comparison test for systems would be your own application with your own workload. Unfortunately, it is often impractical to get a wide base of reliable, repeatable and comparable measurements for different systems using your own application with your own workload. Problems might include generation of a good test case, confidentiality concerns, difficulty ensuring comparable conditions, time, money, or other constraints.
If not my own application, then what?
You may wish to consider using standardized benchmarks as a reference point. Ideally, a standardized benchmark will be portable, and may already have been run on the platforms that you are interested in. However, before you consider the results you need to be sure that you understand the correlation between your application/computing needs and what the benchmark is measuring. Are the benchmarks similar to the kinds of applications you run? Do the workloads have similar characteristics? Based on your answers to these questions, you can begin to see how the benchmark may approximate your reality.
Note: A standardized benchmark can serve as reference point. Nevertheless, when you are doing vendor or product selection, SPEC does not claim that any standardized benchmark can replace benchmarking your own actual application.
Performance testing JS
Ideally, the best perf test would be using your own application with your own workload, switching what you need to test: different libraries, machines, etc.
If this is not feasible (and usually it is not), the first important step is to define your workload. It should reflect your application's workload. In this talk, Vyacheslav Egorov talks about shitty workloads you should avoid.
Then, you could use tools like benchmark.js to help you collect metrics, usually speed or throughput. On Sizzle, we're interested in comparing how fixes or changes affect the systemic performance of the library.
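A minimal benchmark.js sketch along those lines (fnA and fnB are placeholders for the implementations being compared):
var suite = new Benchmark.Suite();
suite
    .add('fnA', function () { fnA(); })
    .add('fnB', function () { fnB(); })
    .on('cycle', function (event) {
        console.log(String(event.target)); // e.g. "fnA x 1,234,567 ops/sec ±0.5%"
    })
    .on('complete', function () {
        console.log('Fastest is ' + this.filter('fastest').map('name'));
    })
    .run({ async: true });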
If something is performing really badly, your next step is to look for bottlenecks.
How do I find bottlenecks? Profilers
What is the best way to profile javascript execution?
I find execution time to be the best measure.
You could use console.profile in firebug
I usually just test JavaScript performance, i.e. how long the script runs. jQuery Lover gave a good article link for testing JavaScript code performance, but the article only shows how to test how long your JavaScript code runs. I would also recommend reading the article called "5 tips on improving your jQuery code while working with huge data sets".
Here is a reusable class for timing performance. An example is included in the code:
/*
Helps track time lapse - tells you the time difference between each "check()" and since the "start()".
*/
var TimeCapture = function () {
    var start = new Date().getTime();
    var last = start;
    var now = start;
    this.start = function () {
        start = new Date().getTime();
    };
    this.check = function (message) {
        now = new Date().getTime();
        console.log(message, 'START:', now - start, 'LAST:', now - last);
        last = now;
    };
};
//Example:
var time = new TimeCapture();
//begin tracking time
time.start();
//...do stuff
time.check('say something here')//look at your console for output
//..do more stuff
time.check('say something else')//look at your console for output
//..do more stuff
time.check('say something else one more time')//look at your console for output
UX Profiler approaches this problem from the user's perspective. It groups all the browser events, network activity, etc. caused by some user action (a click) and takes into consideration aspects like latency and timeouts.
Performance testing has become something of a buzzword of late, but that's not to say it is not an important process in QA, or even after the product has shipped. While I develop an app I use many different tools, some of them mentioned above, like the Chrome profiler. But I usually look for a SaaS or something open source that I can get going and forget about until I get that alert saying that something went belly up.
There are lots of awesome tools that will help you keep an eye on performance without having you jump through hoops just to get some basic alerts set up. Here are a few that I think are worth checking out for yourself.
Sematext.com
Datadog.com
Uptime.com
Smartbear.com
Solarwinds.com
To try and paint a clearer picture, here is a little tutorial on how to set up monitoring for a React application.
You could use https://github.com/anywhichway/benchtest which wraps existing Mocha unit tests with performance tests.
The golden rule is to NOT, under ANY circumstances, lock your user's browser. After that, I usually look at execution time, followed by memory usage (unless you're doing something crazy, in which case it could be a higher priority).
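One common way to honor that rule is to break long-running loops into chunks so the UI thread gets a chance to breathe between them (a sketch; processItem and largeArray are hypothetical):
function processInChunks(items, chunkSize) {
    var i = 0;
    (function next() {
        var end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) {
            processItem(items[i]); // per-item work
        }
        if (i < items.length) {
            setTimeout(next, 0); // yield to the browser between chunks
        }
    })();
}
processInChunks(largeArray, 500);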
This is a very old question, but I think we can contribute a simple solution based on ES6 for quickly testing your code.
This is a basic bench for execution time. We use performance.now() to improve the accuracy:
/**
 * Figure out how long it takes for a method to execute.
 *
 * @param {Function} method - the method to test.
 * @param {Array} list - a list of argument sets to pass in.
 * @param {number} iterations - the number of executions.
 * @param {T} context - the context to call the method in.
 * @return {number} the time it took, in milliseconds, to execute.
 */
const bench = (method, list, iterations, context) => {
    let start = 0
    const timer = action => {
        const time = performance.now()
        switch (action) {
            case 'start':
                start = time
                return 0
            case 'stop': {
                const elapsed = time - start
                start = 0
                return elapsed
            }
            default:
                return time - start
        }
    }
    const result = []
    timer('start')
    list = [...list]
    for (let i = 0; i < iterations; i++) {
        for (const args of list) {
            result.push(method.apply(context, args))
        }
    }
    const elapsed = timer('stop')
    console.log(`Called method [${method.name}]
Mean: ${elapsed / iterations}
Exec. time: ${elapsed}`)
    return elapsed
}
const fnc = () => {}
const isFunction = (f) => f && f instanceof Function
const isFunctionFaster = (f) => f && 'function' === typeof f
class A {}
function basicFnc(){}
async function asyncFnc(){}
const arrowFnc = ()=> {}
const arrowRFnc = ()=> 1
// Not functions
const obj = {}
const arr = []
const str = 'function'
const bol = true
const num = 1
const a = new A()
const list = [
[isFunction],
[basicFnc],
[arrowFnc],
[arrowRFnc],
[asyncFnc],
[Array],
[Date],
[Object],
[Number],
[String],
[Symbol],
[A],
[obj],
[arr],
[str],
[bol],
[num],
[a],
[null],
[undefined],
]
const e1 = bench(isFunction, list, 10000)
const e2 = bench(isFunctionFaster, list, 10000)
const rate = e2/e1
const percent = Math.abs(1 - rate)*100
console.log(`[isFunctionFaster] is ${(percent).toFixed(2)}% ${rate < 1 ? 'faster' : 'slower'} than [isFunction]`)
This is a good way of collecting performance information for the specific operation.
var start = new Date().getTime();
for (var n = 0; n < maxCount; n++) {
    /* perform the operation to be measured */
}
var elapsed = new Date().getTime() - start;
assert(true, "Measured time: " + elapsed);
