Measure render time - javascript

I've got a WebGL scene and a set of parameters I can use to balance render quality against speed. I'd like to display the scene to the user at the highest quality I can manage, as long as the frame rate doesn't drop below some threshold as a result. To achieve this, I have to somehow measure the “current” frame rate in response to changes in quality.
But the scene is static as long as the user doesn't interact with it (like rotating the camera with the mouse). I don't want a loop re-rendering the same scene all the time even if nothing changes; I want to stop rendering when the scene stops moving. That means I can't simply average the time between successive frames, since I can't distinguish between the renderer being slow and the user just moving the mouse more slowly.
I thought about rendering the scene a number of times at startup and judging the frame rate from that. But the complexity of the scene might change over time, due to the portion of the scene visible from the current camera position, or due to user interaction outside the canvas. So I have to adapt the quality as the scene's complexity changes. Running a calibration loop after every mouse release would perhaps be an option.
I also thought about using the finish call instead of the flush call to accurately measure render time. But while I wait for GL to finish rendering, my application will essentially be unresponsive, and in particular won't be able to queue mouse events. Since I envision the rendering ideally taking up all the time between two frames at the target threshold frame rate, that would probably be rather bad. I might get away with using finish instead of flush only on some occasions, like after a mouse release.
What's the best way to achieve a desired frame rate in a WebGL application, or to measure render time on a regular basis?

Why can't you use the average render time?
Just because you render less often doesn't mean you can't average the render times. It will just take a bit longer to get an accurate average.
Once the user starts moving their mouse you'll get an influx of data, and that should quickly give you an accurate average render time anyway.
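For example, here is a minimal sketch of that idea, assuming a drawScene() function that issues your WebGL draw calls (not from the question). It measures the CPU-side time spent issuing each frame (not GPU completion, which is what the finish/flush concern is about) and keeps a small moving average that remains valid while the scene sits idle:

const samples = [];
const MAX_SAMPLES = 60;

function renderOnce() {
    const start = performance.now();
    drawScene();                              // issue the WebGL draw calls
    samples.push(performance.now() - start);  // CPU time for this frame
    if (samples.length > MAX_SAMPLES) samples.shift();
}

function averageRenderTime() {
    return samples.reduce((sum, t) => sum + t, 0) / samples.length;
}

Call renderOnce() only when the camera actually moves; averageRenderTime() can then drive the quality setting once enough samples have accumulated.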

Related

How to start a timer when a three.js render is visible to the user

I'm building an application for performing psychometric visual search tests. For example, I'll render a bunch of 'L's on the screen and, optionally, a 'T'. The challenge is to measure the time from the moment the image has been drawn to the monitor until a key press from the user indicating a decision as to whether a "target" is present in the image or not.
I'm using Three.js (rather than a 2D library) as I want to evaluate search performance when the scene is rendered orthographically (traditional psychometric 2D visual search) AND when rendered in perspective as, for example, an educational game might be. Targets/distractors will be 3D cubes, polyhedrons, or text (i.e. L's and T's), maybe 50 of them max, so not a very complicated scene at all.
I did spot a prior Q/A where the goal was to determine when a render was finished so the image could be saved. Adapted for my needs it might look something like this:
function init() {
    ...
    mesh = new THREE.Mesh(...);  // build the targets and distractors
    scene.add(mesh);
    requestAnimationFrame(startTimerWaitForKeyDown);
    render();
}
Am I correct in my understanding that my startTimerWaitForKeyDown function will get called AFTER the initial render() of the scene has been synced to the screen, as an indication that the browser is ready to render a second frame, and that readiness to render the second frame is implicit evidence that the first frame is finished and has been synced to the monitor?
I'm never rendering a second frame... just trying to start a timer at the moment the first render is visible to the test subject (or the best proxy thereof).
Accuracy to a millisecond or two is fine...I'm just trying to avoid the 16.66ms of render/frame-sync uncertainty.
Thanks!
JD
PS. the prior Q/A is here
THREE.js static scene, detect when webgl render is complete
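For reference, a common proxy for "the first frame is on screen" is a double requestAnimationFrame issued after the render call; a rough sketch, assuming the renderer, scene, camera and startTimerWaitForKeyDown from the snippet above:

renderer.render(scene, camera);

requestAnimationFrame(function () {
    // First callback: the browser is preparing the next frame, so the
    // rendered canvas should have been handed off to the compositor.
    requestAnimationFrame(function (timestamp) {
        // Second callback: by now the first frame should be visible.
        // The timestamp argument is frame-aligned, which is usually a
        // better reference than calling performance.now() here.
        startTimerWaitForKeyDown(timestamp);
    });
});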

Coordinating dragmove to not exceed 30 FPS

I have animation that happens on dragmove.
However I don't want to waste cycles doing more calculations than I have to.
Essentially I want the dragmove events to only redraw at a reasonable animation rate.
In other words, dragmove events come in as fast as they can; however, I only want to execute code as often as needed for smoothness from the user's perspective.
So far the only solution I have come up with is to have a separate animation loop that does the redraw and ondragmove just sets the variables I need. Is there a more elegant way of doing this?
Think about it this way: 30 FPS is your limitation. The events will arrive on their own schedule regardless of your limitations.
So your idea is not that "un-elegant". I would say it's the only way to go.
Whenever you get a motion event, store it locally; if you already have one stored, overwrite the old data (this is the ignoring part). From your 30 FPS loop, sample the stored motion event; if there is anything there, execute it and then discard it.
This is about it. Pretty much your own words.
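A minimal sketch of that pattern in JavaScript (the element, the redraw() helper and the event payload are illustrative, not from the question):

let pendingDrag = null;                       // latest unprocessed dragmove data

element.on('dragmove', function (e) {
    // Overwrite whatever was stored before; only the newest position matters.
    pendingDrag = { x: e.clientX, y: e.clientY };
});

setInterval(function () {                     // the ~30 FPS redraw loop
    if (pendingDrag) {
        redraw(pendingDrag);                  // your actual drawing code
        pendingDrag = null;                   // consume ("destroy") the sample
    }
}, 1000 / 30);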

WebGL/OpenGL: comparing the performance

For educational purposes I need to compare the performance of WebGL with OpenGL. I have two equivalent programs written in WebGL and OpenGL, now I need to take the frame rate of them and compare them.
In JavaScript I use requestAnimationFrame to animate, and I noticed that it causes the frame rate to always be at 60 FPS, and it only goes down if I switch tab or window. On the other hand, if I just call the render function recursively (without requestAnimationFrame), the window freezes for obvious reasons.
This is how I am taking the FPS:
var stats = new Stats();
stats.domElement.style.position = 'absolute';
stats.domElement.style.left = '450px';
stats.domElement.style.top = '750px';
document.body.appendChild(stats.domElement);

setInterval(function () {
    stats.begin();
    stats.end();
}, 1000 / 60);

var render = function () {
    requestAnimationFrame(render);
    renderer.render(scene, camera);
};

render();
Now the problem if having always the scene at 60 FPS is that I cannot actually compare it with the frame rate of OpenGL, since OpenGL redraws the scene only when it is somehow modified (for example if I rotate the object) and glutPostRedisplay() gets called.
So I am wondering whether there is a way in WebGL to redraw the scene only when necessary, for example when the object is rotated or when some attributes in the shaders change.
You can't compare framerates directly across GPUs in WebGL by pushing frames. Rather you need to figure out how much work you can get done within a single frame.
So, basically pick some target framerate and then keep doing more and more work until you go over your target. When you've hit your target that's how much work you can do. You can compare that to some other machine or GPU using the same technique.
Some people will suggest using glFinish to check timing. Unfortunately that doesn't actually work because it stalls the graphics pipeline and that stalling itself is not something that normally happens in a real app. It would be like timing how fast a car can go from point A to point B but instead of starting long before A and ending long after B you slam on the brakes before you get to B and measure the time when you get to B. That time includes all the time it took to slow down which is different on every GPU and different between WebGL and OpenGL and even different for each browser. You have no way of knowing how much of the time spent is time spent slowing down and how much of it was spent doing the thing you actually wanted to measure.
So instead, you need to go full speed the entire time. Just like a car you'd accelerate to top speed before you got to point A and keep going top speed until after you pass B. The same way they time cars on qualifying laps.
You don't normally stall a GPU by slamming on the brakes (glFinish), so adding the stopping time to your timing measurements is irrelevant and doesn't give you useful info. Using glFinish you'd be timing drawing + stopping. If one GPU draws in 1 second and stops in 2, and another GPU draws in 2 seconds and stops in 1, your timing will say 3 seconds for both GPUs. But if you ran them without stopping, one GPU would draw 3 things a second and the other GPU would only draw 1.5 things a second. One GPU is clearly faster, but using glFinish you'd never know that.
Instead you run full speed by drawing as much as possible and then measure how much you were able to get done and maintain full speed.
Here's one example:
http://webglsamples.org/lots-o-objects/lots-o-objects-draw-elements.html
It basically draws each frame. If the frame rate was 60fps it draws 10 more objects the next frame. If the frame rate was less than 60fps it draws fewer.
Because browser timing is not perfect you might need to choose a slightly lower target, like 57fps, to find out how fast it can really go.
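A rough sketch of that adjust-until-you-hit-the-target idea (the numbers and the drawObjects() helper are illustrative, not taken from the linked sample):

let numObjects = 10;
const TARGET_FPS = 57;                 // slightly below 60 to allow for timer jitter
let lastTime = performance.now();

function frame(now) {
    const fps = 1000 / (now - lastTime);
    lastTime = now;

    // Add work while we comfortably hit the target, remove work when we miss it.
    if (fps >= TARGET_FPS) {
        numObjects += 10;
    } else {
        numObjects = Math.max(1, numObjects - 10);
    }

    drawObjects(numObjects);           // assumed helper: draws numObjects test objects
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);

The value numObjects settles at is the number you compare across machines, or between WebGL and OpenGL.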
On top of that, WebGL and OpenGL really just talk to the GPU and the GPU does the real work. The work done by the GPU will take the exact same amount of time regardless of if WebGL asks the GPU to do it or OpenGL. The only difference is in the overhead of setting up the GPU. That means you really don't want to draw anything heavy. Ideally you'd draw almost nothing. Make your canvas 1x1 pixel, draw a single triangle, and check the timing (as in how many single triangles can you draw one triangle at a time in WebGL vs OpenGL at 60fps).
It gets even worse though. A real app will switch shaders, switch buffers, switch textures, and update attributes and uniforms often. So, what are you timing? How many times you can call gl.drawBuffers at 60fps? How many times you can call gl.enable or gl.vertexAttribPointer or gl.uniform4fv at 60fps? Some combination? What's a reasonable combination? 10% calls to gl.vertexAttribPointer + 5% calls to gl.bindBuffer + 10% calls to gl.uniform? The timing of those calls is the only thing different between WebGL and OpenGL, since ultimately they're talking to the same GPU and that GPU will run at the same speed regardless.
You actually do not want to use framerate to compare these things because, as you just mentioned, you are artificially capped to 60 FPS due to VSYNC.
The number of frames presented will be capped by the swap buffer operation when VSYNC is employed and you want to factor that mess out of your performance measurement. What you should do is start a timer at the beginning of your frame, then at the end of the frame (just prior to your buffer swap) issue glFinish (...) and end the timer. Compare the number of milliseconds to draw (or whatever resolution your timer measures) instead of the number of frames drawn.
The correct solution is to use the ANGLE_timer_query extension when available.
Quoting from the specification:
OpenGL implementations have historically provided little to no useful timing information. Applications can get some idea of timing by reading timers on the CPU, but these timers are not synchronized with the graphics rendering pipeline. Reading a CPU timer does not guarantee the completion of a potentially large amount of graphics work accumulated before the timer is read, and will thus produce wildly inaccurate results. glFinish() can be used to determine when previous rendering commands have been completed, but will idle the graphics pipeline and adversely affect application performance.
This extension provides a query mechanism that can be used to determine the amount of time it takes to fully complete a set of GL commands, and without stalling the rendering pipeline. It uses the query object mechanisms first introduced in the occlusion query extension, which allow time intervals to be polled asynchronously by the application.
(emphasis mine)
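In WebGL this mechanism is exposed (where supported) through the EXT_disjoint_timer_query extension; a rough usage sketch for a WebGL 1 context gl, with no guarantee the extension is available on a given browser/GPU:

const ext = gl.getExtension('EXT_disjoint_timer_query');

const query = ext.createQueryEXT();
ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
// ... issue the GL commands you want to time ...
ext.endQueryEXT(ext.TIME_ELAPSED_EXT);

// The result arrives asynchronously; poll on later frames.
function pollQuery() {
    const available = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
    const disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
    if (available && !disjoint) {
        const nanoseconds = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT);
        console.log('GPU time: ' + (nanoseconds / 1e6) + ' ms');
    } else if (!available) {
        requestAnimationFrame(pollQuery);     // keep polling without stalling
    }
}
requestAnimationFrame(pollQuery);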

Html canvas 1600x1200 screen tearing

I've seen a couple of questions asking about this, but they're all over three years old and usually end by saying there's not much of a way around it yet, so I'm wondering if anything has changed.
I'm currently working on a game that draws onto a canvas using an interval that fires 60 times a second. It works great on my iPhone and PC, which has a fairly decent graphics card, but I'm now trying it on a Thinkcentre with Intel i3 graphics, and I notice some huge screen tearing:
http://s21.postimg.org/h6c42hic7/tear.jpg - it's a little harder to notice as a still.
I was just wondering if there's any way to reduce that, or to easily enable vertical sync. If there isn't, is there something I could do in my Windows 8 app port of the game?
Are you using requestAnimationFrame (RAF)? RAF will v-sync but setTimeout/setInterval will not.
http://msdn.microsoft.com/library/windows/apps/hh920765
Also, since 30fps is adequate for your users to see smooth motion, how about splitting your 60fps into 2 alternating parts:
"calculate/update" during one frame (no drawing)
and then do all the drawing in the next frame.
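A rough sketch of that alternating pattern (updateGameState() and drawEverything() are placeholder names):

let doDrawing = false;

function frame() {
    if (doDrawing) {
        drawEverything();      // canvas drawing only
    } else {
        updateGameState();     // physics / game logic only, no drawing
    }
    doDrawing = !doDrawing;    // swap roles each frame, so each runs at ~30fps
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);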
And, get to know Chrome's Timeline tool. This great little tool lets you analyze your code to discover where your code is taking the most time. Then refactor that part of your code for high performance.
[ Addition: More useful details about requestAnimationFrame ]
Canvas does not paint directly to the display screen. Instead, canvas "renders" to a temporary offscreen buffer. “Rendering” means the process of executing canvas commands to draw on the offscreen buffer. This offscreen buffer will be quickly drawn to the actual display screen when the next screen refresh occurs.
Tearing occurs when the offscreen rendering process is only partially complete when the offscreen buffer is drawn on the actual display screen during refresh.
setInterval does not attempt to coordinate rendering with screen refresh. So, using setInterval to control animation frames will occasionally produce tearing.
requestAnimationFrame (RAF) attempts to fix tearing by generating frames only between screen refreshes (a process called vertical synching). The typical display refreshes about 60 times per second (that’s every 16 milliseconds).
With requestAnimationFrame (RAF): if the current frame is not fully rendered before the next refresh, RAF will delay the painting of the current frame until the next screen refresh. This delay reduces tearing.
So for you, RAF will likely help your tearing problem, but it also introduces another problem.
You must decide how to handle your physics processing:
Keep it in a separate process—like setInterval.
Move it into requestAnimationFrame.
Move it into web-workers (the work is done on a background thread separate from the UI thread).
Keep physics in a separate setInterval.
This is a bit like riding 2 trains with 1 leg on each—very difficult! You must be sure that all aspects of the physics are always in a valid state because you never know when RAF will read the physics to do rendering. You will probably have to create a “buffer” of your physics variables so they always are in a valid state.
Move physics into RAF:
If you can both calculate the physics and render within the 16ms between refreshes, this solution is ideal. If not, your frame may be delayed until the next refresh cycle. This results in 30fps, which is not terrible since the eye still perceives smooth motion at 30fps. The worst case is when the delay sometimes occurs and sometimes doesn't; then your animation may appear jerky. So the key here is to spread the calculations as evenly as possible between refresh cycles.
Move physics into web workers
Javascript is single-threaded. Both the UI and calculations must run on this single thread. But you can use web workers which run physics on a separate thread. This frees up the UI thread to concentrate on rendering and painting. But you must coordinate the background physics with the foreground UI.
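A minimal sketch of that split (the worker file name, draw() and step() are illustrative):

// main thread: render whatever state the worker last sent
const worker = new Worker('physics-worker.js');
let latestState = null;

worker.onmessage = function (msg) {
    latestState = msg.data;              // newest physics state wins
};

function frame() {
    if (latestState) draw(latestState);  // your canvas drawing code
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);

// physics-worker.js: advance the simulation off the UI thread
// let state = createInitialState();
// setInterval(function () {
//     state = step(state);
//     postMessage(state);
// }, 1000 / 60);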
Good luck with your game :)

What is the most processing efficient way to store mouse movement data in JavaScript?

I'm trying to record exactly where the mouse moves on a web page (to the pixel). I have the following code, but there are gaps in the resulting data.
var mouse = new Array();
$("html").mousemove(function (e) {
    mouse.push(e.pageX + "," + e.pageY);
});
But, when I look at the data that is recorded, this is an example of what I see.
76,2 //start x,y
78,5 //moved right two pixels, down three pixels
78,8 //moved down three pixels
This would preferably look more like:
76,2 //start x,y
77,3 //missing
78,4 //missing
78,5 //moved right two pixels, down three pixels
78,6 //missing
78,7 //missing
78,8 //moved down three pixels
Is there a better way to store pixel by pixel mouse movement data? Are my goals too unrealistic for a web page?
You can only save that information as fast as it's given to you. The mousemove event fires at a rate that is determined by the browser, usually topping out at 60 Hz. Since you can certainly move your pointer faster than 60 px/second, you won't be able to fill in the blanks unless you do some kind of interpolation.
That sounds like a good idea to me; imagine the hassle (and performance drag) of having 1920 mousemove events firing at once when you jump the mouse to the other side of the screen - and I don't even think the mouse itself polls fast enough, gaming mice don't go beyond 1000 Hz.
See a live framerate test here: http://jsbin.com/ucevar/
On the interpolation, see this question, which implements Bresenham's line algorithm; you can use it to find the missing points. This is a hard problem: the PenUltimate app for the iPad implements some amazing interpolation that makes line drawings look completely natural and fluid, but there is nothing about it on the web.
As for storing the data, just push an array of [x, y] instead of a string. A slow event handler function will also slow down the rate at which events arrive, since events are dropped when the handler falls behind.
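A rough sketch combining both suggestions, storing [x, y] pairs and filling the gaps with a textbook Bresenham line (the interpolation code is generic, not from the linked question):

const mouse = [];
let last = null;

// Standard integer Bresenham line between two points.
function bresenham(x0, y0, x1, y1) {
    const pts = [];
    const dx = Math.abs(x1 - x0), dy = Math.abs(y1 - y0);
    const sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
    let err = dx - dy;
    while (true) {
        pts.push([x0, y0]);
        if (x0 === x1 && y0 === y1) break;
        const e2 = 2 * err;
        if (e2 > -dy) { err -= dy; x0 += sx; }
        if (e2 < dx)  { err += dx; y0 += sy; }
    }
    return pts;
}

$("html").mousemove(function (e) {
    const current = [e.pageX, e.pageY];
    if (last) {
        // Add every pixel between the previous sample and this one,
        // skipping the previous sample itself (it is already stored).
        const filled = bresenham(last[0], last[1], current[0], current[1]);
        for (let i = 1; i < filled.length; i++) mouse.push(filled[i]);
    } else {
        mouse.push(current);
    }
    last = current;
});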
The mouse doesn't exist at every pixel when you move it. During the update cycle, it actually jumps from point to point in a smooth manner, so to the eye it looks like it hits every point in between, when in fact it just skips around willy-nilly.
I'd recommend just storing the points where the mouse move event was registered. Each interval between two points creates a line, which can be used for whatever it is you need it for.
And, as far as processing efficiency goes...
Processing efficiency is going to depend on a number of factors. What browser is being used, how much memory the computer has, how well the code is optimized for the data-structure, etc.
Rather than prematurely optimize, write the program and then benchmark the slow parts to find out where your bottlenecks are.
1. I'd probably create a custom Point object with a bunch of functions on the prototype and see how it performs.
2. If that bogs down too much, I'd switch to using object literals with x and y set.
3. If that bogs down, I'd switch to using two arrays, one for x and one for y, and make sure to always set x and y values together.
4. If that bogs down, I'd try something new.
5. goto 4
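Illustrative shapes of those three options inside a mousemove handler (pick one in practice; all names are placeholders):

function Point(x, y) { this.x = x; this.y = y; }

const points = [];            // option 1: custom Point objects
const literals = [];          // option 2: plain object literals
const xs = [], ys = [];       // option 3: two parallel arrays

document.addEventListener('mousemove', function (e) {
    points.push(new Point(e.pageX, e.pageY));    // 1
    literals.push({ x: e.pageX, y: e.pageY });   // 2
    xs.push(e.pageX); ys.push(e.pageY);          // 3: always pushed together
});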
Is there a better way to store pixel by pixel mouse movement data?
What are your criteria for "better"?
Are my goals too unrealistic for a web page?
If your goal is to store a new point each time the cursor enters a new pixel, yes. Also note that browser pixels don't necessarily map 1:1 to screen pixels, especially in the case of CRT monitors where they almost certainly don't.
