I have recently started exploring JavaScript in more detail, particularly how it executes within the browser and, specifically, the setTimeout function.
My understanding is that calling setTimeout(foo, x)
will schedule foo to be executed after x milliseconds. How reliable is this timing? Obviously, if another long-running script is still executing after x milliseconds then the browser won't be able to call foo, but can I be absolutely certain that setTimeout(foo, 101) will always be executed after setTimeout(foo, 100)?
First of all, the timeout is in milliseconds, so 1 second = 1000 ms. Keep that in mind.
You can always be sure that a delay of 1001 will fire later than one of 1000.
BUT you must remember that if the second callback relies on changes made by the first one, that ordering alone does not guarantee correct behaviour.
The first callback may reasonably take, say, 3 ms of work (even if it isn't complicated), while the second one is scheduled to start only 1 ms after the first, so relying on the first one having finished its work can fail.
I would suggest not relying on this ordering except in some rare cases.
You can tag me in a comment on this answer with your specific case and I can suggest the right way to work it out.
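A minimal sketch of the pitfall described above, using the hypothetical helpers startWork and useResult: if the second callback depends on asynchronous work started by the first, scheduling them 1 ms apart is fragile; chaining from the completion callback is safer.

let result = null;

function startWork() {
  // Simulated asynchronous work that takes ~10 ms to finish.
  setTimeout(() => { result = 42; }, 10);
}

function useResult() {
  console.log(result);
}

// Fragile: the 1 ms gap between the two timers says nothing about whether
// startWork's asynchronous work has finished, so useResult may log null.
// setTimeout(startWork, 1000);
// setTimeout(useResult, 1001);

// Safer: schedule the dependent step from the work's completion callback.
setTimeout(() => {
  setTimeout(() => {
    result = 42;
    useResult(); // runs only after the work has actually completed
  }, 10);
}, 1000);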
In nodejs we may be able to time events very precisely by setting a timeout to expire shortly before the desired moment, then creating a tight series of event-loop ticks and, after each one, checking how close we've gotten to the target time:
Imagine Date.now() is currently, e.g., 11000 (unrealistic; just an example!)
Determine we want to act in EXACTLY 4000ms
Note that means EXACTLY when Date.now() === 15000
Use setTimeout to wait less than 4000ms, e.g., 3800ms
Keep awaiting microticks until Date.now() >= 15000
This won't block the event loop
(But it will keep your CPU very busy)
let preciseWaitMs = async (ms, { nowFn=Date.now, stopShortMs=200, epsilonMs=0 }={}) => {
  let target = nowFn() + ms;
  await new Promise(resolve => setTimeout(resolve, ms - stopShortMs));
  // Allow `epsilonMs` to shift our wait time by a small amount
  target += epsilonMs;
  // Await a huge series of microticks
  // Note: this will eat cpu! Don't set `stopShortMs` too high!
  while (target > nowFn()) await Promise.resolve();
};
(async () => {
  let t1 = Date.now();
  let target = t1 + 2000;
  console.log(`Trying to act in EXACTLY 2000ms; at that point the time should be ${target}`);
  await preciseWaitMs(2000);
  let t2 = Date.now();
  console.log(`Done waiting; time is ${t2}; deviation from target: ${t2 - target}ms (negative means early)`);
})();
Note that preciseWaitMs(4000) will never wait less than 4000ms. This means that, if anything, it is biased towards waiting too long. I added an epsilonMs option so the user can shift this bias; for example preciseWaitMs(4000, { epsilonMs: -1 }) may cancel out preciseWaitMs's tendency to always be slightly late.
Note that some js environments provide a higher-precision current-time query than Date.now - for example, nodejs has require('perf_hooks').performance.now. You can supply a function like this using the nowFn option:
{ epsilonMs: -0.01, nowFn: require('perf_hooks').performance.now };
Many browsers support window.performance.now; try supplying it wrapped in an arrow function (so it keeps its this binding):
{ epsilonMs: -0.01, nowFn: () => window.performance.now() };
These settings achieve sub-millisecond "precise" timing; off by only about 0.01ms on average
Note the -0.01 value for epsilonMs is what seemed to work best for my conditions. Note that supplying fractional epsilonMs values is only meaningful if nowFn provides hi-res timestamps (which isn't, for example, the case with Date.now).
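As a quick usage sketch, assuming the preciseWaitMs function above and a browser that exposes performance.now:

(async () => {
  // Wrapping performance.now in an arrow function keeps `this` bound correctly.
  const nowFn = () => performance.now();
  const before = nowFn();
  await preciseWaitMs(500, { nowFn, epsilonMs: -0.01 });
  console.log(`waited ${(nowFn() - before).toFixed(3)}ms (target 500ms)`);
})();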
Most browsers use a single thread for both the UI and JavaScript, and that thread is blocked by synchronous calls, so JavaScript execution blocks rendering.
Events are processed asynchronously, with the exception of DOM events.
But the setTimeout trick is very useful (see the sketch below). It allows you to:
Let the browser render the current changes.
Evade the "script is running too long" warning.
Change the execution flow.
Opera is special in many places when it comes to timeouts and threading, so if another function is already executing it may handle the timeout by running it in parallel.
One more thing about setTimeout(fn, 1000): the delay is in milliseconds, not seconds.
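A minimal sketch of the "let the browser render" use, breaking a long loop into chunks with a 0 ms timeout (the element id status and the chunk sizes are made up for illustration):

const status = document.getElementById('status'); // hypothetical element
let i = 0;

function processChunk() {
  // Do a small slice of the work...
  const end = Math.min(i + 1000, 100000);
  for (; i < end; i++) { /* ... heavy work on item i ... */ }

  status.textContent = `processed ${i} items`;

  if (i < 100000) {
    // Yield back to the browser so it can repaint, then continue.
    setTimeout(processChunk, 0);
  }
}

processChunk();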
Related
This code recursively calls the same function with a setTimeout of 1 millisecond, which in theory should call the function 1000 times per second. However, it's only called about 200 times per second:
This behaviour happens on different machines and in different browsers. I checked whether it has something to do with the maximum call stack size, but that limit is actually far higher than 200 in any browser.
const info = document.querySelector("#info");
let start = performance.now();
let iterations = 0;
function run() {
  if (performance.now() - start > 1000) {
    info.innerText = `${iterations} function calls per second`;
    start = performance.now();
    iterations = 0;
  }
  iterations++;
  setTimeout(run, 1);
}
run();
<div id="info"></div>
There’s a limitation on how often nested timers can run. The HTML standard says: "If nesting level is greater than 5, and timeout is less than 4, then set timeout to 4."
This is true only for client-side engines (browsers). In Node.js this limitation does not exist.
HTML Standard Timers Section
Very similar example from javascript.info
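A small sketch of how one might observe the clamp in a browser console (exact numbers will vary by browser and machine):

let prev = performance.now();
let level = 0;

function tick() {
  const now = performance.now();
  console.log(`nesting level ${level}: ${(now - prev).toFixed(2)}ms since last call`);
  prev = now;
  if (++level < 10) setTimeout(tick, 1); // after ~5 levels the gap jumps to ~4ms
}

setTimeout(tick, 1);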
The delay argument passed to setTimeout and setInterval is not a guaranteed amount of time. It's the minimum amount of time you could expect to wait before the callback function is executed. It doesn't matter how much of a delay you've asked for: if the JavaScript call stack is busy, then anything in the event queue will have to wait.
Also, there is an absolute minimum delay after which a callback can reasonably be expected to be called, and that minimum depends on the internals of the client.
From the HTML5 Spec:
This API does not guarantee that timers will run exactly on schedule.
Delays due to CPU load, other tasks, etc, are to be expected.
I once read somewhere that it was around 16ms so setting a delay of anything less than that shouldn't really change the timings at all.
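A minimal sketch of the "minimum, not guaranteed" point: even a 0 ms timer has to wait for the call stack to clear (the 500 ms busy loop is only for illustration):

const scheduled = Date.now();
setTimeout(() => {
  console.log(`fired after ${Date.now() - scheduled}ms (asked for 0)`);
}, 0);

// Keep the stack busy for ~500 ms; the timer cannot fire until this returns.
while (Date.now() - scheduled < 500) { /* busy wait */ }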
I got this code over here:
var date = new Date();
setTimeout(function(e) {
  var currentDate = new Date();
  if (currentDate - date >= 1000) {
    console.log(currentDate, date);
    console.log(currentDate - date);
  }
  else {
    console.log("It was less than a second!");
    console.log(currentDate - date);
  }
}, 1000);
On my computer, it always executes correctly, with 1000 in the console output. Interestingly, on another computer, with the same code, the timeout callback starts in less than a second and the difference currentDate - date is between 980 and 998.
I know there are libraries that address this inaccuracy (for example, Tock).
Basically, my question is: why does setTimeout not fire after exactly the given delay? Could it be that the computer is too slow and the browser automatically tries to adapt to the slowness and fires the event earlier?
PS: Here is a screenshot of the code and the results executed in the Chrome JavaScript console:
It's not supposed to be particularly accurate. There are a number of factors limiting how soon the browser can execute the code; quoting from MDN:
In addition to "clamping", the timeout can also fire later when the page (or the OS/browser itself) is busy with other tasks.
In other words, the way that setTimeout is usually implemented, it is just meant to execute after a given delay, and once the browser's thread is free to execute it.
However, different browsers may implement it in different ways. Here are some tests I did:
var date = new Date();
setTimeout(function(e) {
  var currentDate = new Date();
  console.log(currentDate - date);
}, 1000);
// Browser Test1 Test2 Test3 Test4
// Chrome 998 1014 998 998
// Firefox 1000 1001 1047 1000
// IE 11 1006 1013 1007 1005
Perhaps the < 1000 times from Chrome could be attributed to inaccuracy in the Date type, or perhaps it could be that Chrome uses a different strategy for deciding when to execute the code—maybe it's trying to fit it into the nearest time slot, even if the timeout delay hasn't completed yet.
In short, you shouldn't use setTimeout if you expect reliable, consistent, millisecond-scale timing.
In general, computer programs are highly unreliable when trying to execute things with higher precision than 50 ms. The reason for this is that even on an octacore hyperthreaded processor the OS is usually juggling several hundreds of processes and threads, sometimes thousands or more. The OS makes all that multitasking work by scheduling all of them to get a slice of CPU time one after another, meaning they get 'a few milliseconds of time at most to do their thing'.
Implicitly this means that if you set a timeout for 1000 ms, chances are far from small that the current browser process won't even be running at that point in time, so it's perfectly normal for the browser not to notice until 1005, 1010 or even 1050 milliseconds that it should be executing the given callback.
Usually this is not a problem, it happens, and it's rarely of utmost importance. If it is, all operating systems supply kernel level timers that are far more precise than 1 ms, and allow a developer to execute code at precisely the correct point in time. JavaScript however, as a heavily sandboxed environment, doesn't have access to kernel objects like that, and browsers refrain from using them since it could theoretically allow someone to attack the OS stability from inside a web page, by carefully constructing code that starves other threads by swamping it with a lot of dangerous timers.
As for why the test yields 980 I'm not sure - that would depend on exactly which browser you're using and which JavaScript engine. I can however fully understand if the browser just manually corrects a bit downwards for system load and/or speed, ensuring that "on average the delay is still about the correct time" - it would make a lot of sense from the sandboxing principle to just approximate the amount of time required without potentially burdening the rest of the system.
Someone please correct me if I am misinterpreting this information:
According to a post from John Resig regarding the inaccuracy of performance tests across platforms (emphasis mine)
With the system times constantly being rounded down to the last queried time (each about 15 ms apart) the quality of performance results is seriously compromised.
So there is up to a 15 ms fudge on either end when comparing to the system time.
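A quick sketch of how one might measure the clock granularity the quote refers to (results vary by OS and browser; on modern systems Date.now usually ticks every 1 ms or better):

// Spin until Date.now() changes, and record the size of each jump.
const jumps = [];
let last = Date.now();
while (jumps.length < 10) {
  const now = Date.now();
  if (now !== last) {
    jumps.push(now - last);
    last = now;
  }
}
console.log('observed clock increments (ms):', jumps);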
I had a similar experience.
I was using something like this:
var iMillSecondsTillNextWholeSecond = (1000 - (new Date().getTime() % 1000));
setTimeout(function () {
  CountDownClock(ElementID, RelativeTime);
}, iMillSecondsTillNextWholeSecond); // Wait until the next whole second to start.
I noticed it would skip a second every couple of seconds, and sometimes it would go for longer.
However, I'd still catch it skipping after 10 or 20 seconds and it just looked rickety.
I thought, "Maybe the timeout is too slow or waiting for something else?"
Then I realized, "Maybe it's too fast, and the timers the browser is managing are off by a few milliseconds?"
After adding +1 millisecond to my variable I only saw it skip once.
I ended up adding +50 ms, just to be on the safe side.
var iMillSecondsTillNextWholeSecond = (1000 - (new Date().getTime() % 1000) + 50);
I know, it's a bit hacky, but my timer is running smoothly now. :)
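An alternative sketch that avoids the fixed +50 ms fudge: recompute the delay to the next whole second on every tick, so small timer errors cannot accumulate (CountDownClock and its arguments are the hypothetical names from the snippet above):

function scheduleNextTick() {
  // Recompute the remaining time to the next whole second on every tick.
  var msToNextSecond = 1000 - (Date.now() % 1000);
  setTimeout(function () {
    CountDownClock(ElementID, RelativeTime); // hypothetical update function
    scheduleNextTick();                      // re-arm relative to the wall clock
  }, msToNextSecond);
}

scheduleNextTick();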
JavaScript has ways of dealing with time frames more robustly. Here's one approach:
Save Date.now() when you start waiting, create an interval with a short update period, and calculate the difference between the dates on each tick.
Example:
const startDate = Date.now()
const intervalId = setInterval(() => {
  const currentDate = Date.now()
  if (currentDate - startDate >= 1000) {
    // it was a second
    clearInterval(intervalId)
    return
  }
  // it was not a second
}, 50)
UPD: The question What is the reason JavaScript setTimeout is so inaccurate? asks why the timers in JavaScript are inaccurate in general, and all mentions of inaccuracy there are about invocations slightly after the specified delay. Here I'm asking why Node.js also tolerates invocations before the delay. Isn't that an error-prone design for timers?
I just found an unexpected (to me only?) behaviour of Node.js setTimeout(). Sometimes it triggers earlier than the specified delay.
function main() {
  let count = 100;
  while (count--) {
    const start = process.hrtime();
    const delay = Math.floor(Math.random() * 1000);
    setTimeout(() => {
      const end = process.hrtime(start);
      const elapsed = (end[0] * 1000 + end[1] / 1e6);
      const dt = elapsed - delay;
      if (dt < 0) {
        console.log('triggered before delay', delay, dt);
      }
    }, delay);
  }
}
main();
On my laptop the output is:
$ node --version
v8.7.0
$ node test.js
triggered before delay 73 -0.156439000000006
triggered before delay 364 -0.028260999999986325
triggered before delay 408 -0.1185689999999795
triggered before delay 598 -0.19596799999999348
triggered before delay 750 -0.351709000000028
Is it a "feature" of the event loop? I always thought that it must be triggered at least after delay ms.
From the NodeJS docs:
The callback will likely not be invoked in precisely delay milliseconds. Node.js makes no guarantees about the exact timing of when callbacks will fire, nor of their ordering. The callback will be called as close as possible to the time specified.
As you increase the number of timers (you have 100), the accuracy decreases; e.g., with 1000 timers accuracy is even worse, and with ten it's much better. As Node.js has to track more timers, its accuracy decreases.
We can posit that the algorithm has a "reasonable delta" that determines final accuracy, and that it does not include a check to make sure the callback runs after the specified interval. That said, it's easy enough to find out with some digging in the source.
See also How is setTimeout implemented in node.js, which includes more details; preliminary source investigation seems to confirm both this and the above.
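If firing a few hundred microseconds early matters for your use case, one workaround (sketched below with a made-up helper name, setTimeoutNotEarly, on the assumption that a small late error is acceptable but an early one is not) is to check the elapsed time in the callback and re-arm the timer for the remainder:

function setTimeoutNotEarly(fn, delay) {
  const start = process.hrtime();
  const check = () => {
    const [s, ns] = process.hrtime(start);
    const elapsedMs = s * 1000 + ns / 1e6;
    if (elapsedMs >= delay) {
      fn();
    } else {
      // Fired early: re-arm for the remaining time (at least 1 ms).
      setTimeout(check, Math.max(1, Math.ceil(delay - elapsedMs)));
    }
  };
  setTimeout(check, delay);
}

// Usage: will not run before 500 ms have elapsed.
setTimeoutNotEarly(() => console.log('at least 500ms later'), 500);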
From the Node.js timers doc, about setTimeout:
The only guarantee is that the timeout will not execute sooner than the declared timeout interval
(I need a process.nextTick equivalent on browser.)
I'm trying to get the most out of JavaScript performance, so I made a simple counter.
For one second, I make continuous calls to a function that just adds one to a variable.
The code: codepen.io/rafaelcastrocouto/pen/gDFxt
I got about 250 with setTimeout and 70 with requestAnimationFrame in google chrome / win7.
I know requestAnimationFrame goes with screen refresh rate so, how can we make this faster?
PS: I'm aware of asm.js
Well, there's setImmediate(), which runs the code immediately, i.e. as you'd expect to get with setTimeout(0).
The difference is that setTimeout(0) doesn't actually run immediately; setTimeout is "clamped" to a minimum wait time (4ms), which is why you're only getting a count of 250 in your test program. setImmediate() really does run immediately, so your counter test will be orders of magnitude higher using it.
However you may want to check browser support for setImmediate -- it's not available yet in all browsers. (you can use setTimeout(0) as a fallback of course though, but then you're back to the minimum wait time it imposes).
postMessage() is also an option, and can achieve much the same results, although it's a more complex API as it's intended for doing a lot more than just a simple call loop. Plus there are other considerations to think of when using it (see the linked MDN article for more).
The MDN site also mentions a polyfill library for setImmediate which uses postMessage and other techniques to add setImmediate into browsers that don't support it yet.
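For reference, a minimal sketch of the postMessage trick such polyfills use (the function name setZeroTimeout and the message string are made up for illustration):

// Queue of pending callbacks and an arbitrary message tag.
const timeouts = [];
const messageName = 'zero-timeout-message'; // hypothetical tag

function setZeroTimeout(fn) {
  timeouts.push(fn);
  window.postMessage(messageName, '*');
}

window.addEventListener('message', (event) => {
  if (event.source === window && event.data === messageName) {
    event.stopPropagation();
    if (timeouts.length > 0) timeouts.shift()();
  }
}, true);

// Usage: fires on the next task, without setTimeout's minimum-delay clamp.
setZeroTimeout(() => console.log('ran without the clamp'));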
With requestAnimationFrame(), you ought to get 60 for your test program, since that's the standard number of frames per second. If you're getting more than that, then your program is probably running for more than an exact second.
You'll never get a high figure in your count test using it, because it only fires 60 times a second (or fewer if the hardware refresh frame-rate is lower for some reason), but if your task involves an update to the display then that's all you need, so you can use requestAnimationFrame() to limit the number of times it's called, and thus free up resources for other tasks in your program.
This is why requestAnimationFrame() exists. If all you care about is getting your code to run as often as possible then don't use requestAnimationFrame(); use setTimeout or setImmediate instead. But that's not necessarily the best thing for performance, because it will eat up the processor power that the browser needs for other tasks.
Ultimately, performance isn't just about getting something to run the maximum number of times; it's about making the user experience as smooth as possible. And that often means imposing limits on your call loops.
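A minimal sketch of the requestAnimationFrame pattern for display updates (the counter element id is hypothetical):

const counterEl = document.getElementById('counter'); // hypothetical element
let frames = 0;

function draw() {
  frames++;
  counterEl.textContent = `frame ${frames}`;
  // Re-schedule for the next repaint (~60 times per second on most displays).
  requestAnimationFrame(draw);
}

requestAnimationFrame(draw);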
The shortest possible delay while still being asynchronous comes from MutationObserver, but it is so short that if you just keep calling it, the UI will never have a chance to update.
So the trick would be to use MutationObserver to increment the value while using requestAnimationFrame once in a while to update the UI, but that is not allowed here.
See http://jsfiddle.net/6TZ9J/1/
var div = document.createElement("div");
var count = 0;
var cur = true;
var now = Date.now();
var observer = new MutationObserver(function () {
  count++;
  if (Date.now() - now > 1000) {
    document.getElementById("count").textContent = count;
  } else {
    change();
  }
});
observer.observe(div, {
  attributes: true,
  childList: true,
  characterData: true
});
function change() {
  cur = !cur;
  div.setAttribute("class", cur);
}
change();
Use postMessage() as described in this blog.