toFixed() behavior on performance.now() - javascript

When using toFixed() on performance.now(), it reveals some extra digits which I assumed would normally be rounded away. But it seems the number of digits changes based on platform.
On Chrome (v87.0.4280.66), it shows up to 35 digits:
window.performance.now().toFixed(35); // "241989.00499999945168383419513702392578125"
On Node.js (v15.2.1), it's only up to 28:
performance.now().toFixed(28) // "1092.9840000011026859283447265625"
The same behavior exists on performance.timeOrigin too. I assume it is possible to make much more accurate measurements with performance.now(), but that accuracy depends on hardware and software factors, so they just keep the accuracy at a minimum standard.
Does using toFixed(100) on performance.now() make it more accurate?
What factors affect the precision (the number of meaningful digits) of performance.now()?
Can we safely say that (performance.timeOrigin + performance.now()).toFixed(12) lets us measure time accurately down to almost femtoseconds (10⁻¹⁵ s), or at least much more accurately than Date.now()?

Node's performance.now is literally:
function now() {
  const hr = process.hrtime();
  return hr[0] * 1000 + hr[1] / 1e6;
}
Its resolution is nanoseconds, so not 26 digits after the decimal point; those extra digits are meaningless in that API. This just calls uv_hrtime (see node_process_methods.cc), which does clock_gettime, which is just the standard way to get nanosecond time.
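A minimal Node sketch of that point (output values are illustrative): the clock underneath is integer nanoseconds, so six fractional digits of a millisecond already exhaust its resolution, and anything beyond that is just how the nearest IEEE-754 double happens to print.

const { performance } = require('perf_hooks');

console.log(process.hrtime.bigint());        // exact integer nanoseconds since an arbitrary origin
console.log(performance.now().toFixed(6));   // milliseconds with nanosecond granularity
console.log(performance.now().toFixed(28));  // the extra digits are double-representation noise, not clock data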
In browsers the situation is worse: because of timing attacks that do fingerprinting or cache value extraction, performance.now is less accurate:
To offer protection against timing attacks and fingerprinting, the precision of performance.now() might get rounded depending on browser settings.
So you can really only rely on the milliseconds value.
What it returns is clamped in Chrome. See time_clamper.cc for more info. Basically its precision is limited to:
static constexpr double kResolutionSeconds = 5e-6;
Intentionally.
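You can observe the clamp empirically with a rough sketch like this in the devtools console (the exact step depends on the browser, its settings, and whether the page is cross-origin isolated):

const seen = new Set();
const t0 = performance.now();
while (performance.now() - t0 < 10) {       // sample for ~10 ms
  seen.add(performance.now());
}
const values = [...seen].sort((a, b) => a - b);
const gaps = values.slice(1).map((v, i) => +(v - values[i]).toFixed(6));
console.log(gaps.slice(0, 20));             // in Chrome the gaps cluster around 0.005 ms (5 µs)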
As the other answer points out, .toFixed just formats a number as a string and is unrelated to any of this.
Note: the fact an API is precise to X digits does not in any way indicate digits after the Xth are zero. It only means that you can only rely on accuracy up to that digit.
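To see that the trailing digits come from the number format rather than the clock, you can run toFixed on a plain decimal that has no exact double representation (a sketch, no timing API involved):

console.log((0.005).toFixed(35));      // "0.0050000000000000001040834..." : digits of the nearest IEEE-754 double
console.log((241989.005).toFixed(35)); // same effect on a performance.now()-sized value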

I think saying that toFixed() makes it more accurate is wrong.
Basically toFixed() only formats the input number.
As for performance.now(), it returns a milliseconds value which represents the time elapsed since the time origin.
The time origin is a standard time which is considered to be the beginning of the current document's lifetime.
So from your question you could imagine that this value would be calculated differently across different platforms.
From mozilla.org:
If the current Document is the first one loaded in the Window, the time origin is the time at which the browser context was created.
If during the process of unloading the previous document which was loaded in the window, a confirmation dialog was displayed to let the user confirm whether or not to leave the previous page, the time origin is the time at which the user confirmed that navigating to the new page was acceptable.
If neither of the above determines the time origin, then the time origin is the time at which the navigation responsible for creating the window's current Document took place.
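Putting the two parts together, a quick sanity check (a sketch; exact agreement depends on the clamping above and on system clock adjustments) shows that performance.timeOrigin + performance.now() lands within a millisecond or two of Date.now(), which is nowhere near femtosecond accuracy:

const absolute = performance.timeOrigin + performance.now(); // milliseconds since the Unix epoch
console.log(absolute, Date.now(), Math.abs(absolute - Date.now()));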

Related

Date.now() with microsecond precision [duplicate]

Are there any timing functions in JavaScript with microsecond resolution?
I am aware of timer.js for Chrome, and am hoping there will be a solution for other friendly browsers, like Firefox, Safari, Opera, Epiphany, Konqueror, etc. I'm not interested in supporting any IE, but answers including IE are welcome.
(Given the poor accuracy of millisecond timing in JS, I'm not holding my breath on this one!)
Update: timer.js advertises microsecond resolution, but it simply multiplies the millisecond reading by 1,000. Verified by testing and code inspection. Disappointed. :[
As alluded to in Mark Rejhon's answer, there is an API available in modern browsers that exposes sub-millisecond resolution timing data to script: the W3C High Resolution Timer, aka window.performance.now().
now() is better than the traditional Date.getTime() in two important ways:
now() is a double with submillisecond resolution that represents the number of milliseconds since the start of the page's navigation. It returns the number of microseconds in the fractional part (e.g. a value of 1000.123 is 1 second and 123 microseconds).
now() is monotonically increasing. This is important as Date.getTime() can possibly jump forward or even backward on subsequent calls. Notably, if the OS's system time is updated (e.g. atomic clock synchronization), Date.getTime() is also updated. now() is guaranteed to always be monotonically increasing, so it is not affected by the OS's system time -- it will always be wall-clock time (assuming your wall clock is not atomic...).
now() can be used in almost every place that new Date().getTime(), + new Date and Date.now() are. The exception is that Date and now() times don't mix, as Date is based on the Unix epoch (the number of milliseconds since 1970), while now() is the number of milliseconds since your page navigation started (so it will be much smaller than Date).
now() is supported in Chrome stable, Firefox 15+, and IE10. There are also several polyfills available.
Note: When using Web Workers, the window variable isn't available, but you can still use performance.now().
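Typical usage looks like this (a sketch; the loop is just stand-in work):

const start = performance.now();
for (let i = 0; i < 1e6; i++) { /* work being measured */ }
const elapsed = performance.now() - start;   // fractional milliseconds
console.log(`took ${elapsed.toFixed(3)} ms`);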
There's now a new method of measuring microseconds in javascript:
http://gent.ilcore.com/2012/06/better-timer-for-javascript.html
However, in the past, I found a crude method of getting 0.1 millisecond precision in JavaScript out of a millisecond timer. Impossible? Nope. Keep reading:
I'm doing some high-precision experiments that require self-checked timer accuracies, and found I was able to reliably get 0.1 millisecond precision with certain browsers on certain systems.
I have found that in modern GPU-accelerated web browsers on fast systems (e.g. an i7 quad core with several cores idle and only the browser window open), I can now trust the timers to be millisecond-accurate. In fact, it's become so accurate on an idle i7 system that I've been able to reliably get the exact same millisecond over more than 1,000 attempts. Only when I try to do things like load an extra web page does the millisecond accuracy degrade (and I'm able to catch my own degraded accuracy by doing a before-and-after time check, to see if my processing time suddenly lengthened to 1 or more milliseconds; this helps me invalidate results that have probably been too adversely affected by CPU fluctuations).
It's become so accurate in some GPU-accelerated browsers on i7 quad-core systems (when the browser window is the only window open) that I've found myself wishing I could access a 0.1 ms precision timer in JavaScript, since the accuracy is finally there on some high-end browsing systems to make such timer precision worthwhile for certain niche applications that require high precision and can self-verify for accuracy deviations.
Obviously if you are doing several passes, you can simply run multiple passes (e.g. 10 passes) then divide by 10 to get 0.1 millisecond precision. That is a common method of getting better precision -- do multiple passes and divide the total time by number of passes.
HOWEVER... if I can only do a single benchmark pass of a specific test due to an unusual situation, I found that I can get 0.1 ms (and sometimes 0.01 ms) precision by doing this:
Initialization/Calibration:
1. Run a busy loop to wait until the timer increments to the next millisecond (this aligns the timer to the beginning of the next millisecond interval). This busy loop lasts less than a millisecond.
2. Run another busy loop to increment a counter while waiting for the timer to increment. The counter tells you how many counter increments occurred in one millisecond. This busy loop lasts one full millisecond.
3. Repeat the above until the numbers become ultra-stable (loading time, JIT compiler, etc.).
4. NOTE: The stability of the number gives you your attainable precision on an idle system. You can calculate the variance if you need to self-check the precision. The variances are bigger on some browsers and smaller on other browsers, bigger on faster systems and slower on slower systems. Consistency varies too; you can tell which browsers are more consistent/accurate than others. Slower systems and busy systems will lead to bigger variances between initialization passes. This can give you an opportunity to display a warning message if the browser is not giving you enough precision to allow 0.1 ms or 0.01 ms measurements. Timer skew can be a problem, but some integer millisecond timers on some systems increment quite accurately (right on the dot), which will result in very consistent calibration values that you can trust.
5. Save the final counter value (or the average of the last few calibration passes).
Benchmarking one pass to sub-millisecond precision:
1. Run a busy loop to wait until the timer increments to the next millisecond (this aligns the timer to the beginning of the next millisecond interval). This busy loop lasts less than a millisecond.
2. Execute the task you want to precisely benchmark.
3. Check the timer. This gives you the integer milliseconds.
4. Run a final busy loop to increment a counter while waiting for the timer to increment. This busy loop lasts less than a millisecond.
5. Divide this counter value by the original counter value from initialization.
6. Now you've got the decimal part of the milliseconds! (A rough code sketch of this whole procedure follows the notes below.)
WARNING: Busy loops are NOT recommended in web browsers, but fortunately, these busy loops run for less than 1 millisecond each, and are only run a very few times.
Variables such as JIT compilation and CPU fluctuations add massive inaccuracies, but if you run several initialization passes, you'll have full dynamic recompilation, and eventually the counter settles to something very accurate. Make sure that all busy loops use exactly the same function in all cases, so that differences between busy loops do not lead to differences in results. Make sure all lines of code are executed several times before you begin to trust the results, to allow the JIT compiler to have already stabilized to a full dynamic recompilation (dynarec).
In fact, I witnessed precision approaching microseconds on certain systems, but I wouldn't trust it yet. The 0.1 millisecond precision, however, appears to work quite reliably on an idle quad-core system where mine is the only browser page. I came to a scientific test case where I could only do one-off passes (due to unique variables occurring) and needed to precisely time each pass, rather than averaging multiple repeat passes, so that's why I did this.
I did several pre-passes and dummy passes (also to settle the dynarec) to verify the reliability of 0.1 ms precision (it stayed solid for several seconds), then kept my hands off the keyboard/mouse while the benchmark occurred, then did several post-passes to verify the reliability of 0.1 ms precision (it stayed solid again). This also verifies that things such as power-state changes didn't occur between the before and after measurements and interfere with the results. Repeat the pre-test and post-test between every single benchmark pass. With this, I was virtually certain the results in between were accurate. There is no guarantee, of course, but it goes to show that accurate <0.1 ms precision is possible in some cases in a web browser.
This method is only useful in very, very niche cases, and it will never be 100% guaranteeable, but you can gain very trustworthy accuracy, and even scientific accuracy when combined with several layers of internal and external verification.
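Here is a rough, hedged sketch of the calibration procedure described above (my own interpretation, not code from the original experiments; Date.now() stands in for the integer-millisecond timer, and since the counter measured after the task covers the time remaining until the next tick, it is subtracted from one):

function waitForNextTick() {
  // Busy-wait until the millisecond value changes, aligning to a tick boundary.
  const start = Date.now();
  let now = start;
  while (now === start) now = Date.now();
  return now;
}

function calibrate(passes = 20) {
  const samples = [];
  for (let p = 0; p < passes; p++) {
    const tick = waitForNextTick();
    let count = 0;
    while (Date.now() === tick) count++;      // increments that fit in one full millisecond
    samples.push(count);
  }
  const settled = samples.slice(Math.floor(passes / 2)); // later passes, after the JIT settles
  return settled.reduce((a, b) => a + b, 0) / settled.length;
}

function timeOnePass(task, countsPerMs) {
  const startTick = waitForNextTick();        // align to a millisecond boundary
  task();                                     // the work being measured
  const endTick = Date.now();                 // integer milliseconds at the end of the task
  let count = 0;
  while (Date.now() === endTick) count++;     // counts the remainder of the final millisecond
  const remaining = Math.min(count / countsPerMs, 1);
  return (endTick - startTick) + (1 - remaining); // elapsed ms with roughly 0.1 ms resolution
}

// Illustrative usage:
const countsPerMs = calibrate();
console.log(timeOnePass(() => { for (let i = 0; i < 1e5; i++); }, countsPerMs));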
Here is an example showing my high-resolution timer for node.js:
function startTimer() {
  const time = process.hrtime();
  return time;
}

function endTimer(time) {
  function roundTo(decimalPlaces, numberToRound) {
    return +(Math.round(numberToRound + `e+${decimalPlaces}`) + `e-${decimalPlaces}`);
  }
  const diff = process.hrtime(time);
  const NS_PER_SEC = 1e9;
  const result = diff[0] * NS_PER_SEC + diff[1]; // result in nanoseconds
  const elapsed = result * 1e-6;                 // nanoseconds to milliseconds
  return roundTo(6, elapsed);                    // result in milliseconds
}
Usage:
const start = startTimer();
console.log('test');
console.log(`Time since start: ${endTimer(start)} ms`);
Normally, you might be able to use:
console.time('Time since start');
console.log('test');
console.timeEnd('Time since start');
If you are timing sections of code that involve looping, you cannot easily get at the value of console.timeEnd() in order to add your timer results together. You can, but it gets nasty because you have to inject the value of your iterating variable, such as i, and set a condition to detect if the loop is done.
Here is an example because it can be useful:
const num = 10;
console.time(`Time til ${num}`);
for (let i = 0; i < num; i++) {
  console.log('test');
  if ((i + 1) === num) { console.timeEnd(`Time til ${num}`); }
  console.log('...additional steps');
}
Cite: https://nodejs.org/api/process.html#process_process_hrtime_time
The answer is "no", in general. If you're using JavaScript in some server-side environment (that is, not in a browser), then all bets are off and you can try to do anything you want.
edit — this answer is old; the standards have progressed and newer facilities are available as solutions to the problem of accurate time. Even so, it should be remembered that outside the domain of a true real-time operating system, ordinary non-privileged code has limited control over its access to compute resources. Measuring performance is not the same (necessarily) as predicting performance.
editing again — For a while we had performance.now(), but at present (2022 now) browsers have degraded the accuracy of that API for security reasons.

JavaScript: what is the relationship between processor speed and code speed?

I would hope that the faster my processor is, the faster my code would run.
I can measure code to millisecond precision using:
new Date().getTime()
What is the correlation between the two?
How can I expect this to relate to, say, a processor running at 3.2 GHz?
Can anyone quantify this relationship even if it is a very rough estimate?
// start_time
run some simple code to be timed.
// end_time
The [EDIT: CPU] clock time that gets allotted to a JS script is determined by a number of factors, including:
browser/version
OS
current power state
A demonstration of this can be seen in Windows 8's Advanced Power Options menu. Expand the Internet Explorer node and you'll notice that the entry below is for JavaScript timer frequency. That's exactly what you think it is -- a setting which controls how often the JS clock 'ticks'. The more ticks in a second, the more often the JS engine gets to execute code, the more code executed, the more power it takes.
So to answer your question:
Yes, in a very general sense processor clock speed can determine how fast a particular piece of JS runs, but it would be a mistake to assume it is a straightforward correlation.
EDIT (more info):
I can't dig up the link but I'll update here if I find it. Using setTimeout or setInterval, the smallest unit of time you can pass into those methods that will actually be honored is 100 ms. It's possible to have higher frequencies than that, but 100 ms is all that is guaranteed.
I found something close to what I was thinking in this article:
http://javascript.info/tutorial/settimeout-setinterval
Essentially, JavaScript timers operate on a queued basis: you can call setTimeout(fn, 10), and your request will be queued to execute after 10 ms, but that doesn't mean it will be executed after exactly that amount of time, just that it's queued to do so. If you measure the difference between expected and actual (above a threshold, probably 100 ms), you could gather offset data to calculate the resulting frequency (or 'clock speed') the script is running at. See this article for an example of benchmarking JS in more precise ways.
From that second article, we see that the minimum timeout you can get is 4ms:
Using setTimeout for measuring graphics performance is another bad idea. The setTimeout interval is capped to 4 ms in browsers, so the most you can get out of it is 250 FPS. Historically, browsers had different minimum intervals, so you might have had a very broken trivial draw benchmark that showed browser A running at 250 FPS (4 ms min interval) and browser B running at 100 FPS (10 ms min interval). Clearly A is faster! Not! It could well be that B ran the draw code faster than A, say A took 3 ms and B took 1 ms. Doesn’t affect the FPS, as the draw time is less than the minimum setTimeout interval. And if the browser renders asynchronously, all bets are off. Don’t use setTimeout unless you know what you’re doing.
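A small sketch to see both effects yourself (assumes a browser with performance.now(); the clamp typically kicks in after a few nested calls): request a 1 ms timeout repeatedly and measure what you actually get.

function measureTimeout(requestedMs, samples, done) {
  const deltas = [];
  let last = performance.now();
  (function tick() {
    setTimeout(() => {
      const now = performance.now();
      deltas.push(now - last);               // actual delay, not the requested one
      last = now;
      if (deltas.length < samples) tick();
      else done(deltas);
    }, requestedMs);
  })();
}

measureTimeout(1, 20, deltas => console.log(deltas.map(d => d.toFixed(2)))); // later values settle near ~4 ms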

Best approach for creating a time sensitive browser app for desktop with high time precision?

I have to develop an application that will be required to measure user response time within a series of experiments. This application will be run on approximately 100 computers, so a browser-based approach seemed best to cut down on installs and updates. That said, now that I've read about the accuracy of timers in browsers, specifically JavaScript timers, I'm learning there can be a delay from a few milliseconds to upwards of 10-15 ms. This sort of delay would have a negative impact on the precision of these experiments when users measure their response time to visual cues on the screen.
Is there a known way around this to get precise timers for a web application being run within a browser through javascript? Likewise, some of these experiments require simple 2d animation, such as seeing how long a user can keep their mouse cursor over a rotating cube. I would prefer to use HTML5 and Javascript and we cannot use flash.
True you can't rely on a timer's interval, but you can rely on a new Date object. For instance:
var previousTime = new Date().getTime(); // returns milliseconds of "now"
var timer = setInterval(function() {
  var currentTime = new Date().getTime();
  var millisecondsSinceLastUpdate = currentTime - previousTime;
  previousTime = currentTime;
  alert('milliseconds since last update: ' + millisecondsSinceLastUpdate);
}, 1);
Just as #dqhendricks suggests, you should use a new Date object. However, if you need this to be consistent and accurate, it would have to run on the same computer with the same browser version and nothing running in the background.
Check out this demo in Firefox and Chrome. Click on the edge and move your cursor out as fast as you can: the lowest I can get in FF is 6 ms, the lowest I can get in Chrome is 42 ms.
These values can vary even more widely between machines. I set up a VM with 512 MB of RAM and a 50% single-core cap, opened up 5 tabs in FF and attempted the test; the lowest I've gotten is 52 ms.
So unless you can control the users browser, PC and things running in the background I would not count on the data being accurate.
If you have a newer browser you can use performance.now(), which offers more precision than the Date object, but the other performance concerns of using the browser still apply :(
http://updates.html5rocks.com/2012/08/When-milliseconds-are-not-enough-performance-now

is epoch time the same on the same machine with different scripts/programs

I want to use shell to get the epoch time, and later use JavaScript on an HTML page to get another epoch time, and then get the difference between them.
But I'm afraid that the epoch time may not be synchronized among different scripts, which would make the difference useless.
So I want to know: if at the very same time I use shell and JavaScript to get the epoch time, will the results be the same or not?
If not, how big is the difference?
Thanks!
If you mean number of seconds since Unix epoch (1970-01-01T00:00:00Z), it is governed by this very definition. The only differences you should be able to see are caused by:
different times of invocation of the system call that returns it*);
unsynchronized clocks on different systems.
and possibly also:
unsynchronized clocks on different processor cores/physical processors;
implementation-dependent handling of the function that returns the current time (e.g. a JS engine might possibly cache the value for a short time so as not to have to do the actual syscall, although I would doubt this).
Depending on the time resolution you need, some of these are not a problem. My guess is that if you don't need granularity finer than 1 s, you should be more than fine (on the same machine).
*) Also note that on a single-core system you can't really get the same time (at least with ns resolution) from different syscalls, unless the kernel caches it, simply because they have to happen one after another.
According to ECMA-262 section 15.9.1.1:
Time is measured in ECMAScript in milliseconds since 01 January, 1970 UTC. In time values leap seconds are ignored. It is assumed that there are exactly 86,400,000 milliseconds per day.
So, yes, every JavaScript implementation that adheres to the standard must return exactly the same values, taking care of any physical clock quirks by itself. Barring a wrongly set system clock, you will have the same value everywhere.
Note that the definition of "epoch" and "seconds since" in other languages and systems can differ (at the very least, most other systems use seconds, not milliseconds, and may treat leap seconds differently), so you can't guarantee that the JS time, even divided by 1000, will match a timestamp from another platform or OS.
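In practice, to compare a JavaScript reading against a shell timestamp such as `date +%s`, truncate the millisecond value to whole seconds (a sketch; any remaining difference comes from the two calls not happening at exactly the same instant, and, across machines, from clock synchronization):

const jsSeconds = Math.floor(Date.now() / 1000); // seconds since 1970-01-01T00:00:00Z, leap seconds ignored
console.log(jsSeconds);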

High resolution date and time representation in JSON/JavaScript

Is there an accepted way to represent high resolution timestamps in JSON and/or JavaScript?
Ideally, I would like it to support at least 100 ns resolution, since that would make the server code a bit simpler (since the .NET DateTime resolution is 100 ns as well).
I found a lot of questions dealing with manipulating high-resolution timers and such (which is not possible, apparently), but I simply need to represent it somehow, in an application API.
This is for an API built in REST style using JSON, so actually measuring time in this resolution is not required. However, I would like to transfer and use (potentially in JavaScript) the timestamp in its full resolution (100 ns, since this is .NET).
In light of your clarification, isn't your timestamp just a really big integer? The current timestamp is 1329826212. If you want nanosecond precision, we are just talking about 9 more digits: 1329826212000000000. JavaScript can carry a number of that size, with one caveat about integer precision (see below). Just send it over as:
myobject = {
"time": 1329826212000000000
}
I just tried doing some arithmetic operations on it (division by a large number and multiplication by the same) and the stored value round-trips. However, JavaScript only guarantees integral accuracy from -2^53 to 2^53 (roughly ±9 × 10^15): a nanosecond-resolution epoch timestamp is around 1.3 × 10^18, so its lowest few digits get rounded to the nearest representable double, while a microsecond-resolution timestamp still fits. If you need every digit, transfer the value as a string rather than a bare JSON number.
UPDATE
I'm sorry, I just re-read your question. You wish to represent it? Well, one thing I can think of is to extract the last 9 digits (the additional precision beyond second granularity) and show them as the decimal part of the number. You may even wish to extract just the last 6 digits (the additional precision beyond millisecond granularity).
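A short sketch of the precision limit mentioned above: a nanosecond-resolution epoch timestamp exceeds Number.MAX_SAFE_INTEGER, so its lowest digits are rounded when stored as a plain Number, while a BigInt (or a string) keeps them exactly.

console.log(Number.MAX_SAFE_INTEGER);                                 // 9007199254740991 (2^53 - 1)
console.log(1329826212000000000 + 1 === 1329826212000000000 + 2);     // true: the Number cannot tell them apart
console.log(1329826212000000000n + 1n === 1329826212000000000n + 2n); // false: BigInt is exact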
The JavaScript Date object only has millisecond precision.
However, if you are just looking for a standard format to encode nanosecond precision times, an ISO 8601 format string will allow you to define nanoseconds as fractions of seconds:
Decimal fractions may also be added to any of the three time elements. A decimal point, either a comma or a dot (without any preference as stated most recently in resolution 10 of the 22nd General Conference CGPM in 2003), is used as a separator between the time element and its fraction. A fraction may only be added to the lowest order time element in the representation. To denote "14 hours, 30 and one half minutes", do not include a seconds figure. Represent it as "14:30,5", "1430,5", "14:30.5", or "1430.5". There is no limit on the number of decimal places for the decimal fraction. However, the number of decimal places needs to be agreed to by the communicating parties.
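If you go the ISO 8601 route, here is a hedged sketch of producing such a string in JavaScript (the helper name and the extraTicks parameter, a 0 to 9999 count of 100 ns units below the millisecond supplied by the server, are my own illustration, not any standard API):

function toIso8601WithTicks(ms, extraTicks) {
  const base = new Date(ms).toISOString();           // always ends ".mmmZ" with exactly three fraction digits
  const msPart = String(ms % 1000).padStart(3, '0');
  const ticksPart = String(extraTicks).padStart(4, '0');
  return base.replace(/\.\d{3}Z$/, `.${msPart}${ticksPart}Z`);
}

console.log(toIso8601WithTicks(1329826212123, 4567)); // "2012-02-21T12:10:12.1234567Z" (seven fractional digits)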
There is a trick used by jsperf.com and benchmarkjs.com that uses a small Java applet that exposes Java's nanosecond timer.
See Stack Overflow question Is there any way to get current time in nanoseconds using JavaScript?.
