setTimeout in JavaScript makes a function run faster - javascript

I have an application in which I have to push a lot of values to an array, so I tested the execution time:
var st = new Date().getTime();
var a = [];
for (var i = 0; i < 20971520; i++) {
  a.push(i);
}
var ed = new Date().getTime();
console.info((ed - st) / 1000);
console.info(a.length);
I ran the code directly in the Firefox and Chrome consoles, and it took 37 seconds. During the execution, the mouse could still be moved in Chrome, but nothing on the page responded.
Then I changed the code:
function push() {
  var st = new Date().getTime();
  var a = [];
  for (var i = 0; i < 20971520; i++) {
    a.push(i);
  }
  var ed = new Date().getTime();
  console.info((ed - st) / 1000);
  console.info(a.length);
}
var tr = setTimeout(push, 50);
Simply putting the code in a function and calling it via setTimeout brought it down to 0.844 seconds. And during the execution, I could operate Chrome normally.
What's going on here?
I know that setTimeout hands control back to the browser to do its UI work, which keeps the page responsive. For example, when I do some calculation during a mousemove on the page, I defer the calculation so it doesn't block the UI.
But why does it reduce the total execution time of the same code?

And during the execution, I could operate Chrome normally.
Not true. The main Chrome window will be just as frozen as in the other case (just for a shorter while). The dev tools run in a separate thread, though, and will not slow down.
But why does it reduce the total execution time of the same code?
It only does if you run it in the dev tools. If you execute the code in a real page, where the VM can make proper optimizations, the times are comparable (nearly 1 second), e.g.:
var st = new Date().getTime();
var a = [];
for (var i = 0; i < 20971520; i++) {
  a.push(i);
}
var ed = new Date().getTime();
console.info('normal', (ed - st) / 1000);
console.info(a.length);

function push() {
  var st = new Date().getTime();
  var a = [];
  for (var i = 0; i < 20971520; i++) {
    a.push(i);
  }
  var ed = new Date().getTime();
  console.info('timeout', (ed - st) / 1000);
  console.info(a.length);
}
var tr = setTimeout(push, 0);
See http://jsfiddle.net/gu9Lg52j/ and you will see the normal version executes just as fast as the setTimeout version.
Also, if you wrap the code in a function and execute it in a console, the time will be comparable even without a setTimeout, as the VM can make optimizations between the function definition and its execution:
function push() {
  var st = new Date().getTime();
  var a = [];
  for (var i = 0; i < 20971520; i++) {
    a.push(i);
  }
  var ed = new Date().getTime();
  console.info('function', (ed - st) / 1000);
  console.info(a.length);
}
push();

Both variations of code should run with almost identical speed (the latter example might be faster but not 10 times faster).
Inside the Chrome developer tools, there is a different story. The expressions are evaluated inside a with block. This means variables such as a and i are first looked up in another object (the __commandLineAPI object). This adds extra overhead, which results in the 10-times-longer execution time.
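To get a feel for this overhead, here is a minimal sketch; the real __commandLineAPI object is internal to DevTools, so the empty object below just stands in for it (run it in non-strict mode, since with is forbidden in strict mode):
var __fakeCommandLineAPI = {}; // stand-in for DevTools' internal object
var st = new Date().getTime();
with (__fakeCommandLineAPI) {
  // every lookup of a and i now checks __fakeCommandLineAPI first
  var a = [];
  for (var i = 0; i < 20971520; i++) {
    a.push(i);
  }
}
var ed = new Date().getTime();
console.info('with block', (ed - st) / 1000);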

All JavaScript engines perform various optimizations. For example, V8 uses two compilers, a simple one used by default and an optimizing one. Code not compiled by the optimizing compiler is slow, very slow.
A condition for the optimizing compiler to run is that the code must be in a (not too long) function (there are other conditions). The first code you tried in the console isn't in a function. Put your first code in a function and you'll see it performs the same as the second one; setTimeout changes nothing.
It makes zero sense to check performance in the console when the main performance factor is the optimizing compilation. If you're targeting Node, use a benchmarking framework. If you're targeting the browser, use a site like jsPerf.
Now, when you have to do a really long computation in the browser (which doesn't seem to be the case here), you should consider using web workers which do the job in a background thread not impacting the UI.

setTimeout, as others have noticed, doesn't speed up the array creation, and it does lock the browser. If you are concerned about browser lockup during the array creation, web workers (see MDN) may come to the rescue. Here is a jsFiddle demo using a web worker for your code. The worker code is within the HTML:
onmessage = function (e) {
  var a = [], now = new Date();
  for (var i = 0; i < 20971520; i++) {
    a.push(i);
  }
  postMessage({timings: 'duration: ' + (new Date() - now) +
    ' ms, result: [' + a[0] + '...' + a[a.length - 1] + ']'});
};
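For completeness, a minimal sketch of the main-thread side (assuming the worker code above is saved as worker.js; the fiddle extracts it from the HTML instead):
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
  console.log(e.data.timings); // logs the duration and result range
};
worker.postMessage('start'); // any message kicks off the loop in the worker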

Related

Different performance of JavaScript on different fiddles and Chrome itself

As a newcomer to Javascript coming from a background in C#, I conducted a basic performance test to compare the performance of JS with that of C#. To my surprise, the same code produced different performance results across different fiddles. Can you explain why this may be the case?
var start = performance.now();
var iterations = 100000000;
for (var i = 0; i < iterations; i++) {
  var j = i * i;
}
var end = performance.now();
var time = end - start;
alert('Execution time: ' + time);
https://jsfiddle.net/sfcu2vo6/4/
https://es6console.com/
I noticed that most websites take around 3 seconds to execute the code, but on Jsfiddle it only takes around 80ms. What is the reason for this difference in performance?
Update
After writing the same code in an HTML file and executing it in Chrome, I still see the same discrepancy. Can you explain why this may be happening?
<html>
<head></head>
<body>
  <script>
    var start = performance.now();
    var iterations = 100000000;
    for (var i = 0; i < iterations; i++) {
      var j = i * i;
    }
    var end = performance.now();
    var time = end - start;
    alert('Execution time: ' + time);
  </script>
</body>
</html>
I am surprised that the code takes more than 3 seconds on most websites, yet JSFiddle runs it in around 80ms. Why is JSFiddle faster in this scenario?
Update 2
I found it interesting that when I saved the code as an .htm file on my desktop and ran it, it took around 80ms like Jsfiddle. However, when I ran the same code from another .htm file, it took around 3 seconds like the other websites. I am confused by this discrepancy. Can someone please try this and confirm if they experience the same results?
Update 3
I have discovered that the reason the code was running faster on JSFiddle is that it was wrapped in window.onload = function() {}. As a beginner, I made the mistake of not including this in my initial tests. I am relieved to have found the reason for the difference in performance.
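For reference, a sketch of the wrapped version (this is roughly what JSFiddle's default onload setting produces; putting the loop inside a function is what lets the engine optimize it):
<script>
  window.onload = function() {
    var start = performance.now();
    var iterations = 100000000;
    for (var i = 0; i < iterations; i++) {
      var j = i * i;
    }
    alert('Execution time: ' + (performance.now() - start));
  };
</script>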
If you take a trace in the Chrome performance tab while executing this code, you'll see that most of the time spent is in es6console.com's code bundles, rather than your function.
I didn't dig into exactly what they are doing, but it's possibly related to the fact that es6console transpiles the code using Babel. In general it's best not to rely on fiddles for performance testing since there are several ways they can add additional overhead on top of your code.

Am I creating a ton of DOM objects with this jQuery code?

I'm making a hex dumper in JavaScript that will analyze the byte data of a file provided by the user. In order to properly display the data preview of the file, I'm escaping html characters using the methods from the top rated answer of this question.
function htmlEncode(value) { return $("<div/>").text(value).html(); }
function htmlDecode(value) { return $("<div/>").html(value).text(); }
I'm not asking for suggestions of how to best encode and decode html characters. What I am curious about is whether or not calling these functions hundreds of thousands of times in rapid succession is creating a metric butt-ton of DOM elements that will slow down the utility over time.
I've noticed that running my dumper on a small file (35 bytes), which thankfully runs almost instantaneously, takes much longer after I've run my dumper on a larger file (132,832 bytes) in the same session. The encode function is essentially run once for each byte.
I know JavaScript has a garbage collector, and these elements aren't tied to anything so I would assume they would get cleaned up after they're done being used, but I don't know the details or inner workings of the collector so I don't want to make any assumptions as to how quickly it will take care of the problem.
Theoretically it's possible that you're using a lot of memory because you are creating numerous new elements. However, since they are not added to the DOM, they should be cleaned up either on the next garbage-collector cycle or as the stack is popped (depending on how optimized the engine is).
But, as @juvian pointed out, you can get around this by having one dedicated element that you reuse for this operation. Not only will it ensure you aren't filling up memory, it will also be faster, since jQuery won't have to repeatedly parse the <div/> string, create an element, generate a jQuery object, etc.
Here's my not-completely-scientifically-sound-but-definitely-good-enough-to-get-the-idea proof:
// Load the best available means of measuring the current time:
// the first call picks an implementation and rebinds now() to it.
function now() {
  if (typeof performance !== 'undefined') {
    now = performance.now.bind(performance);
    return performance.now();
  } else {
    now = Date.now.bind(Date);
    return Date.now();
  }
}
now();
// Generate a whole bunch of characters
var data = [];
var totalNumberOfCharacters = 132832;
for (var i = 0; i < totalNumberOfCharacters; i++) {
  data.push(String.fromCharCode((i % 26) + 65));
}

// Basic encode function
function htmlEncode(value) {
  return $("<div/>").text(value).html();
}

// Cache a single <div> to improve performance
var $div = $('<div/>');
function cachedHtmlEncode(value) {
  return $div.text(value).html();
}

// Encode using the unoptimized approach
var start = now();
var unoptimized = '';
for (var i = 0; i < totalNumberOfCharacters; i++) {
  unoptimized += htmlEncode(data[i]);
}
var end = now();
console.log('unoptimized', end - start);
document.querySelector('pre').innerText = unoptimized;

// Encode using the optimized approach
start = now();
var optimized = '';
for (var i = 0; i < totalNumberOfCharacters; i++) {
  optimized += cachedHtmlEncode(data[i]);
}
end = now();
console.log('optimized', end - start);
document.querySelector('pre').innerText = optimized;
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<pre></pre>

Why is a custom function slower than a builtin?

I'm messing around with the performance of JavaScript's push and pop functions.
I have an array called arr.
When I run this:
for (var i = 0; i < 100; i++) {
  for (var k = 0; k < 100000; k++) {
    arr.push(Math.ceil(Math.random() * 100));
    arr.pop();
  }
}
I get a time of 251.38515999977244 milliseconds (I'm using the performance.now() function).
But when I run a custom push and pop:
Array.prototype.pushy = function(value) {
  this[this.length] = value;
}
Array.prototype.poppy = function() {
  this.splice(-1, 1);
}
for (var i = 0; i < 100; i++) {
  for (var k = 0; k < 100000; k++) {
    arr.pushy(Math.ceil(Math.random() * 100));
    arr.poppy();
  }
}
The time is 1896.055750000014 milliseconds.
Can anyone explain why there's such a huge difference between these?
To those who worry about the time variance: I ran this test 100 times and computed an average time. I did that 5 times to ensure there weren't any outlying times.
Because the built-in function is written in whatever language the browser was written in (probably C++) and is compiled. The custom function is written in JavaScript and is interpreted.
Generally, interpreted languages are much slower than compiled ones. One usually doesn't notice this with JavaScript because, for the most part, you only execute a couple of lines of JS between human interactions (which are always the slowest part).
Running JS in a tight loop, as you've done here, highlights the difference.
The reason is that the built-in function was specifically designed and optimized to perform a specific task. The browser takes whatever shortcuts it can with the built-in function, shortcuts it may not be able to recognize in the custom one. For example, with your implementation, the function needs to look up the array's length every single time it is called.
Array.prototype.pushy = function(value) {
  this[this.length] = value;
}
However, by simply using Array.prototype.push, the browser knows that the purpose is to append a value on to the array. While browsers may implement the function differently, I highly doubt any needs to compute the length of the array for every single iteration.
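As a rough illustration (a sketch, not part of the original benchmark), here is a variant that replaces splice(-1, 1) with a plain length decrement, which also truncates the array in place; on most engines you would expect this to narrow the gap, since it sidesteps the generic splice machinery:
Array.prototype.pushy = function(value) {
  this[this.length] = value;
}
Array.prototype.poppy2 = function() {
  // Shrinking length drops the last element in place;
  // unlike pop(), this does not return the removed value.
  if (this.length > 0) this.length -= 1;
}
var arr = [];
var t0 = performance.now();
for (var i = 0; i < 100; i++) {
  for (var k = 0; k < 100000; k++) {
    arr.pushy(Math.ceil(Math.random() * 100));
    arr.poppy2();
  }
}
console.log('pushy/poppy2:', performance.now() - t0, 'ms');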

Javascript keydown timing

I am working on a very time-sensitive application that uses key presses for user input. As I am talking milliseconds here, I went ahead and tried a version like this:
function start() {
  //stim.style.display = "block";
  rt_start = new Date().getTime();
  response_allowed = 1;
}

function end() {
  var t = rt_end - rt_start;
  //stim.style.display = "none";
  log.innerHTML = t;
  i++;
  if (i < iterations) {
    setTimeout(start, 1000); // pass the function itself rather than a string to evaluate
  }
}

var rt_start;
var rt_end;
var iterations = 100;
var i = 0;
var response_allowed = 0;
var stim;
var log;

$(document).ready(function() {
  document.onkeydown = function(e) {
    if (response_allowed == 1) {
      rt_end = new Date().getTime();
      response_allowed = 0;
      end();
    }
  };
  stim = document.getElementById('stim');
  log = document.getElementById('log');
  start();
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<div id="log"></div>
<img src="https://www.gravatar.com/avatar/cfefd93404e6b0eb3cde02b4b6df4e2b?s=128&d=identicon&r=PG&f=1" id="stim" />
And it works fine, usually sub-5ms timers (just holding down a key). But as soon as I modify the code to display the image (uncommenting the two lines), this slows down a lot to about 30ms.
Can someone point me in the direction of why exactly this is the case and how I could possibly avoid this additional delay?
Thanks
I would recommend using a DOMHighResTimeStamp where available (with a polyfill for browsers that don't provide it).
It's a high-resolution timestamp (designed with accurate measurement in mind) to be used (e.g.) with the Navigation Timing and Web Performance APIs (search for this in the Mozilla Developer Network, as I can't share more than two links within a single post).
The quick way to get a DOMHighResTimeStamp - much like you do with var ts = new Date().getTime(); to get a regular millisecond timestamp - is:
var ts = performance.now();
As I said above, take a look at the Web Performance API at MDN. It will be very helpful if your application is really time-sensitive.
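If it helps, here is a small sketch using the User Timing part of the Web Performance API (mark/measure are standard calls; 'reaction' and the mark names are just example labels):
performance.mark('stim-shown');
// ... the keydown handler fires ...
performance.mark('key-pressed');
performance.measure('reaction', 'stim-shown', 'key-pressed');
// duration is in milliseconds, with sub-millisecond precision
console.log(performance.getEntriesByName('reaction')[0].duration);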
EDIT:
About your snippet: it seems to me that if you hold a key down, you will always be limited to the resolution of the keydown event (which fires repeatedly, but not every millisecond). You can easily see this behavior if you hold a character key down in a text editor and check how many times per second the character is written. This, I guess, is controlled via an OS setting.
You are also limited to the "drift" associated with setTimeout/setInterval. You see, setTimeout queues something for execution after a given delay, but it does not guarantee timely execution. It's a "best effort" scenario and, if the browser is busy doing something, it will drift significantly. Meaning: if you use a setTimeout to re-enable a response_allowed variable after 1 second, you can expect it to re-enable it after "about" (but not exactly) 1 second.
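A quick sketch to observe this drift yourself (the numbers will vary with main-thread load):
var requested = 1000;
var t0 = performance.now();
setTimeout(function () {
  var actual = performance.now() - t0;
  // usually a few ms over the requested delay; much more if the thread is busy
  console.log('requested ' + requested + ' ms, got ' + actual.toFixed(1) + ' ms');
}, requested);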

How to calculate runtime speed in a standalone JavaScript program?

I am doing some JavaScript exercises and thinking about ways to improve my solution (algorithm) to an exercise. I am thinking of calculating the runtime speed after tweaking the code so I will know how much the speed has improved. I searched, found this method, and think I can do the same. Here is what I did:
var startTime = new Date().getTime();
var endTime;
function testTargetAlgorithm(){
....
....
}
testTargetAlgorithm();
endTime = new Date().getTime(); // take a fresh timestamp after the run
console.log(endTime - startTime);
It's a very simple algorithm, so I don't expect there to be a notable difference between the times. But if milliseconds cannot measure the speed improvement, what else can I do?
You can use performance.now() if the engine supports it. This gives a time in milliseconds, with sub-millisecond precision, since the page loaded or app started.
performance.now() // 26742.766999999956
I know Chrome supports it, but I'm not sure about other browsers, Node.js, or other standalone JS engines.
Or you can run your code many times in a loop, and measure the total time taken.
Run the same function again and again.
var startTime = (new Date()).getTime();
for (var i = 0; i < 1000; i++) {
  testTargetAlgorithm();
}
var endTime = (new Date()).getTime();
console.log(endTime - startTime);
edited to reflect suggestions, thanks Marcel
I ended up using process.hrtime() to provide nanosecond precision for measuring runtime performance. Note this method only works in Node.js. In Chrome and Firefox, you can use performance.now().
Even when running the same algorithm/function, the returned time difference still varies (on the order of nanoseconds, though), presumably due to CPU usage and other unknown effects, so it is suggested to run it a good number of times and calculate the average. For example:
function calAvgSpeed(timesToRun, targetAlgorithm){
  var diffSum = 0;
  for (var i = 1; i <= timesToRun; i++){
    var startTime = process.hrtime();
    targetAlgorithm();
    var diff = process.hrtime(startTime);
    // diff is [seconds, nanoseconds]; fold both parts into nanoseconds
    diffSum += diff[0] * 1e9 + diff[1];
  }
  return Math.floor(diffSum / timesToRun); // average in nanoseconds
}
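A usage sketch (Node.js; the function name is just an example stand-in for the algorithm under test):
function testTargetAlgorithm() {
  // ... algorithm under test ...
}
console.log(calAvgSpeed(1000, testTargetAlgorithm) + ' ns on average');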
