How to measure the accuracy of performance.now - javascript

I'm trying to figure out what the accuracy of performance.now is in Chrome. Using the following code:
const results = []
let then = 0
for (let i = 0; i < 10000; i++) {
  const now = performance.now()
  if (Math.abs(now - then) > 1e-6) {
    results.push(now)
    then = now
  }
}
console.log(results.join("\n"))
I am getting the following results:
55058.699999935925
55058.79999976605
55058.89999959618
55058.99999989197
55059.09999972209
My understanding is that these values are in seconds, which means that each measurement is roughly 100ms apart. Is my testing methodology flawed or is performance.now actually limited to 100ms resolution in Chrome? I looked online and what I found stated the accuracy to be 100μs with 100μs jitter.

These results are in milliseconds (if they were seconds, ~55K seconds would mean your page had been open for over 15 hours when this script executed...).
As for the precision, this is browser dependent and subject to change as better mitigations against timing-based attacks are found, but yes, Chrome does limit the accuracy (0.1ms) and adds jitter (±0.1ms), Firefox limits it even further (1ms by default) and also adds jitter (though there you can configure these options), Edge behaves like Chrome according to this comment, and it seems Safari applies a 1ms clamp only...
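A quick way to check this empirically (a sketch of mine, not from the original post) is to sample performance.now() many times and look at the smallest non-zero difference between consecutive readings; jitter will make the exact number wobble, but it gives a good idea of the effective resolution:
function estimateTimerResolution(samples) {
  var smallest = Infinity;
  var prev = performance.now();
  for (var i = 0; i < samples; i++) {
    var now = performance.now();
    var delta = now - prev;
    if (delta > 0 && delta < smallest) smallest = delta;
    prev = now;
  }
  return smallest; // in milliseconds, e.g. ~0.1 in current Chrome
}

console.log("Effective resolution: ~" + estimateTimerResolution(10000) + " ms");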

Related

Date object in Firefox always returns milliseconds rounded to hundreds

I found this behavior by accident when I recently used console.time. In Firefox it always returns either 0ms or 100ms. This happens because the date is always rounded to hundreds of milliseconds. For example, +new Date() will return 1552469978800 instead of 1552469978877. Do you know since when this has been a thing, or how I can get the exact time? It also affects setTimeout and setInterval.
This happens because the date is always rounded to hundreds of milliseconds.
I don't see that behavior in Firefox v65 on *nix, nor v48, v56, v57, or v65 on Windows.
But if it's happening in some versions or on some platforms, it may have been a response to Spectre. For the same reason, the alternative I would have pointed to (performance.now) is less useful than it would otherwise be, because:
The timestamp is not actually high-resolution. To mitigate security threats such as Spectre, browsers currently round the results to varying degrees. (Firefox started rounding to 1 millisecond in Firefox 60.) Some browsers may also slightly randomize the timestamp. The precision may improve again in future releases; browser developers are still investigating these timing attacks and how best to mitigate them.
Finally found the answer to this problem. The whole problem is the privacy.resistFingerprinting setting, enabled by default in recent versions of Firefox.
Fingerprinting protection probably causes more problems than it solves in this case. You are now completely unable to properly set the timezone in JavaScript, so some web apps, for example Slack, will always show GMT+0 time instead of your actual time.
Another annoying thing is that JavaScript animations (especially jQuery plugins) that use setInterval or setTimeout now run at 10 frames per second.
When you disable fingerprinting protection, everything works fine after you restart the browser.
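If you want to verify whether your own build is clamping timestamps, here is a quick sketch (mine, not from the answer): with the 100ms rounding described above, Date.now() should only ever return multiples of 100:
setInterval(function () {
  var t = Date.now();
  console.log(t, t % 100 === 0 ? "multiple of 100 ms (clamped)" : "not clamped");
}, 250);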
Try this:
function convert(ms) {
  var seconds = ms / 1000;
  var hours = parseInt(seconds / 3600);
  seconds = seconds % 3600;
  var minutes = parseInt(seconds / 60);
  seconds = seconds % 60;
  var pad = function(x) { return (x < 10) ? "0" + x : x; };
  return pad(hours) + ":" +
         pad(minutes) + ":" +
         pad(seconds);
}
var time = 100000000;
console.log(convert(time));
That would convert milliseconds to hours : minutes : seconds format.

Canvas performance change in Chrome

I'm working on an animation library, and every once in a while I run a benchmark test to see how much of a gain or loss I get with certain features. Recently I've run into something that has me quite perplexed, perhaps someone with more knowledge can shine a light on this for me.
Performance Before:
Chrome: ~4460 sprites @ 30fps
Safari: ~2817 sprites @ 30fps
FireFox: ~1273 sprites @ 30fps
iPhone 4S: ~450 @ 30fps
Performance Now:
Chrome: ~3000 sprites @ 30fps
Safari: ~2950 sprites @ 30fps
FireFox: ~1900 sprites @ 30fps (before Garbage Collection becomes too distracting)
iPhone 4S: ~635 @ 30fps
So you can see, Chrome took quite a hit in performance, while every other browser seems to have gotten a little better over this time frame. The biggest thing I notice, and what I figure is the answer, is that the CPU usage seems to have been throttled back in Chrome (I swear before I could get up near 90%, now it's maxing out around 60%). The majority of the CPU is being used for the drawImage() call, and I'm not sure I can do anything to optimize that.
If it's simply an issue where Chrome is now limiting my CPU usage, I'm fine with that.
Any insight would be greatly appreciated...
_s.Sprite.prototype.drawBasic = function() {
  var s = this.ctx;
  if (s.globalAlpha != this._alpha) s.globalAlpha = this._alpha;
  var width = this.width;
  var height = this.height;
  var x = this._x;
  var y = this._y;
  if (_s.snapToPixel) {
    // truncate toward negative infinity so sprites land on whole pixels
    x = this._x + (this._x < 0 ? -1 : 0) | 0;
    y = this._y + (this._y < 0 ? -1 : 0) | 0;
    width = width + (width < 0 ? -1 : 0) | 0;
    height = height + (height < 0 ? -1 : 0) | 0;
  }
  var frame = this.sequence[this.frame] || 0;
  var sheetY = frame + (frame < 0 ? -1 : 0) | 0;
  var sheetX = (frame - sheetY) * this.spriteSheetX || 0;
  s.drawImage(this.bitmap.image,
              this.bitmap.frameRect.x2 * sheetX, this.bitmap.frameRect.y2 * sheetY,
              this.bitmap.frameRect.x2, this.bitmap.frameRect.y2,
              x - (width * this._scaleX) * this.anchorX,
              y - (height * this._scaleX) * this.anchorY,
              width * this._scaleX, height * this._scaleY);
  this.updateFrame();
};
UPDATE
So I downloaded an old version of Chrome (25.0.1364.5) and ran my benchmark test, then reran it in the most current version of Chrome.
Clearly Chrome has changed. Was it on purpose? I don't know. You can see that in the old version of Chrome I've actually gained performance over my original 4460 (+ ~400, so my optimizations must have worked), but you can also see that it lets me hover at 100% CPU usage. Twice the CPU, almost twice the objects on screen.
Update
setInterval doesn't have the issue; it only happens with requestAnimationFrame. This finally makes so much sense. requestAnimationFrame already throttles things to 60fps. What I wasn't aware of, and can't seem to find any info on, is that Chrome (others?) throttles it down to 30 (60/2), then 20 (60/3), and probably 15 (60/4)... this keeps it in sync with 60Hz, so you never end up with 40fps that looks strange because it's out of sync with your screen's refresh rate.
This explains a lot. I'm really enjoying the CPU savings this provides us.
Updated
An example without any of my code: http://www.goodboydigital.com/pixijs/canvas/bunnymark/ . If you run this in Chrome you will see the point where it jumps from ~60fps straight to 30fps. You can keep adding more bunnies; Pixi can handle it... Chrome is throttling the fps. This is not how Chrome used to behave.
So I figured out what's going on here. It's not that performance has changed per se; I can still get 4800 objects on the screen at 30fps. What has changed seems to be the way Chrome tries to optimize the end user's experience. It actually throttles things down from 60fps to ~30fps (29.9fps according to dev tools), which causes if (fps >= 30) to return false:
stage.onEnterFrame = function(fps) { // fps = the current system fps
  if (fps >= 30) { // add asteroids until we are below 30fps
    stage.addChild(new Asteroid());
  }
};
For some reason around 2800 objects, Chrome throttles down to 30fps instead of trying to go as fast as possible... So if I start the benchmark with 4800 objects, it stays at a wonderfully consistent 29.9fps.
(you can see here that it's either 60fps or 29.9fps with no real in-between; the only thing that changes is how often it switches)
This is the code used for stage timing...
_s.Stage.prototype.updateFPS = function() {
  var then = this.ctx.then;
  var now = this.ctx.now = Date.now();
  var delta = now - then;
  this.ctx.then = now;
  this.ctx.frameRatio = 60 / (1000 / delta);
};
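If you want to watch the throttling happen, here is a minimal probe (mine, not part of the original benchmark) that logs the raw interval requestAnimationFrame reports; the jump from ~16.7 ms to ~33.3 ms per frame is the 60fps to 30fps switch described above:
var last = performance.now();
function probe(now) { // rAF passes a DOMHighResTimeStamp to the callback
  console.log((now - last).toFixed(2) + " ms since last frame");
  last = now;
  requestAnimationFrame(probe);
}
requestAnimationFrame(probe);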
Hopefully this helps someone else down the road.

Detecting whether browser is fast enough for site

Feature detection is generally preferred over browser sniffing. What should I do in a case where certain browsers "support" the features I'm using but have javascript runtimes that are too slow?
I'm using the d3 library for some complicated visualizations. The visualization is very smooth in chrome / firefox, acceptable in IE9, and slow yet working in IE8. I'd like to display a banner to IE8 users telling them to upgrade and a notice banner to IE9 users that it would be faster in chrome or FF. Is it bad to do this via user agent sniffing?
Why not measure the time the browser takes to compute something complex, similar to what you want to do, and set a threshold time for it?
function detectBrowserSpeed() {
  var i,
      slowThreshold = 100, // milliseconds
      startMillis = +new Date(); // The + 'forces' casting to an integer representing epoch milliseconds. If + is omitted, you get an instance of Date.
  // Do something complex here:
  for (i = 0; i < 100000; i += 0.1) {
  }
  var elapsed = (+new Date()) - startMillis;
  if (elapsed > slowThreshold) {
    return 'slow';
  } else {
    return 'fast';
  }
}
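A hypothetical usage sketch (showUpgradeBanner is just a placeholder for whatever banner UI you already have, not a real function from the question):
if (detectBrowserSpeed() === 'slow') {
  // e.g. tell IE8/IE9 users the visualization will be smoother elsewhere
  showUpgradeBanner('This visualization runs faster in a current version of Chrome or Firefox.');
}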

calculating FPS in Javascript less than 1 millisecond

Is it possible to measure time gaps of less than 1 millisecond in a way that is supported in all browsers? I know of only one way, which is Chrome-only.
The Chrome method: window.performance.now()
Currently I do FPS measurements over millisecond time spans, but if less than 1ms passes I get Infinity, because the two numbers are rounded to the nearest millisecond and are therefore the same value.
Does anyone know a cross-browser way to measure time gaps of less than 1 millisecond in JavaScript?
Here's how you get accurate measurements without an accurate timer, so long as the thing you're timing occurs often, which I'm hoping it does in your case.
Average/aggregate the imprecise measurements of the duration of your event. A snippet out of one of my projects:
var start = Date.now();
... (stuff to be timed)
var now = Date.now();
if (DEBUG.enabled) {
  var profile_this_iter = now - start;
  profile += (profile_this_iter - profile) * 0.02;
}
Each new measurement nudges your reading toward it by a factor of 0.02. Obviously you'll want to tweak that a bit. This will allow you to read an average that hovers around 0.5ms if you read a duration of 1ms half the time and 0ms half the time (with a 1ms-resolution timer).
This is obviously not a replacement for a proper higher resolution timer. But I use this simple algorithm to give my javascript projects a non-crappy FPS reading. You get a damping factor that you can tweak depending on if you want more accuracy or more immediate response to changes. Considering the simplicity of this one, you'd be hard pressed to find a more elegant algorithm to provide you a good representation without any improved input data. One of the ways to enhance it would be to adjust the approach factor (that 0.02 constant) based on the frequency of sampling itself (if that changes), this way a slower measured rate could be made to converge more quickly than with a fixed value.
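As a self-contained illustration of the same idea (my sketch, not the author's exact code), here is the exponential moving average applied to frame times in a requestAnimationFrame loop, with FPS derived from the smoothed value; the 0.02 damping factor is the tunable constant discussed above:
var smoothedFrameMs = 16.7; // initial guess
var prevTime = Date.now();

function tick() {
  var now = Date.now();
  var frameMs = now - prevTime; // single imprecise 1 ms-resolution reading
  prevTime = now;
  smoothedFrameMs += (frameMs - smoothedFrameMs) * 0.02; // nudge toward the new reading
  var fps = 1000 / smoothedFrameMs; // smoothed FPS value you can display
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);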
There is actually another way to calculate the FPS, which may be a way to get around this issue: count the actual number of frames rendered in one second, which should be quite accurate, I think.
var fpsStart = new Date().getTime();
var fpsCounting = 0;
var fps = 0;
start_the_first_frame();

// Loop
function update() {
  do_time_consuming_stuff();
  fpsCounting++;
  var thisFrame = new Date().getTime();
  if (thisFrame - fpsStart >= 1000) {
    fpsStart += 1000;
    fps = fpsCounting;
    fpsCounting = 0;
  }
  request_next_animation_frame();
}
P.S. just typed right here, not tested, may require slight changes.
I remember seeing a way like this in an LWJGL tutorial...
Also, as noted by @StevenLu, you can modify it to count the number of frames in 0.5 seconds and multiply the "fps" by two, or use an even shorter window (e.g. 0.25 seconds) so that the fps value updates more frequently.
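A runnable version of that variant (my sketch, assuming requestAnimationFrame is available): count frames over a 500 ms window and scale up, so the reading refreshes twice per second:
var WINDOW_MS = 500;
var windowStart = performance.now();
var frames = 0;
var fps = 0;

function frame(now) {
  frames++;
  if (now - windowStart >= WINDOW_MS) {
    fps = frames * (1000 / WINDOW_MS); // e.g. 30 frames in 0.5 s => 60 fps
    frames = 0;
    windowStart = now;
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);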
High-resolution time is available in Chrome 20, but you should be aware that timer resolution in JS depends on the browser, device, and circumstances. It can vary between 4ms and 1000+ms.

Monotonically increasing time in JavaScript?

What’s the best way to get monotonically increasing time in JavaScript? I’m hoping for something like Java’s System.nanoTime().
Date() obviously won’t work, as it’s affected by system time changes.
In other words, what I would like is for a <= b, always:
a = myIncreasingTime.getMilliseconds();
...
// some time later, maybe seconds, maybe days
b = myIncreasingTime.getMilliseconds();
At best, even when using the UTC functions in Date(), it will return what it believes is the correct time, but if someone sets the time backward, the next call to Date() can return a lesser value. System.nanoTime() does not suffer from this limitation (at least not until the system is rebooted).
Modification: [2012-02-26: not intended to affect the original question, which has a bounty]
I am not interested in knowing the "wall time"; I'm interested in knowing the elapsed time with some accuracy, which Date() cannot possibly provide.
You could use window.performance.now() (available since Firefox 15) or window.performance.webkitNow() (Chrome 20):
var a = window.performance.now();
//...
var delay = window.performance.now() - a;
You could wrap Date() or Date.now() so as to force it to be monotonic (but inaccurate). Sketch, untested:
var offset = 0;
var seen = 0;

function time() {
  var t = Date.now();
  if (t < seen) {
    offset += (seen - t);
  }
  seen = t;
  return t + offset;
}
If the system clock is set back at a given moment, then it will appear that no time has passed (and an elapsed time containing that interval will be incorrect), but you will at least not have negative deltas. If there are no set-backs then this returns the same value as Date.now().
This might be a suitable solution if you're writing a game simulation loop, for example, where time() is called extremely frequently — the maximum error is the number of set-backs times the interval between calls. If your application doesn't naturally do that, you could explicitly call it on a setInterval, say (assuming that isn't hosed by the system clock), to keep your accuracy at the cost of some CPU time.
It is also possible that the clock will be set forward, which does not prevent monotonicity but might have equally undesirable effects (e.g. a game spending too long trying to catch up its simulation at once). However, this is not especially distinguishable from the machine having been asleep for some time. If such protection is desired, it just means adding a second condition next to the existing one, with a constant threshold for acceptable progress:
if (t > seen + leapForwardMaximum) {
  offset += (seen - t) + leapForwardMaximum;
}
I would suggest that leapForwardMaximum should be set to more than 1000 ms because, for example, Chrome (if I recall correctly) throttles timers in background tabs to fire not more than once per second.
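Putting the two protections together (my sketch of the approach described above; the 2000 ms value is just an example threshold chosen to stay above Chrome's roughly once-per-second background-tab timer throttling):
var offset = 0;
var seen = Date.now();
var leapForwardMaximum = 2000; // max ms of forward progress allowed per call

function time() {
  var t = Date.now();
  if (t < seen) {
    // clock was set back: report no progress rather than a negative delta
    offset += (seen - t);
  } else if (t > seen + leapForwardMaximum) {
    // clock leapt forward: only allow leapForwardMaximum of progress
    offset += (seen - t) + leapForwardMaximum;
  }
  seen = t;
  return t + offset;
}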
JavaScript itself does not have any functionality to access nanoTime. You might load a Java applet to acquire that information, as benchmark.js has done. Maybe @mathias can shed some light on what they did there…
Firefox provides a "delay" argument to setTimeout callbacks...
This is one of the ways to implement a monotonically increasing time counter.
var time = 0;
setTimeout(function x(actualLateness) {
  setTimeout(x, 0);
  time += actualLateness;
}, 0);
