I have an application that uses OpenGL ES 2.0, and uses a GLSurfaceView and a Renderer class to draw to the screen. Originally, I set the render mode to RENDER_WHEN_DIRTY, and then called requestRender() 60 times per second, timing how long the call took to complete. I was consistently getting incredibly short frame times (high framerates), even though the program was clearly lagging on my phone. I then assumed that this was because requestRender() only posts a render request, rather than actually calling onDrawFrame(), so timing it would be pointless.
I then decided to do the timing in the actual onDrawFrame() function. I used SystemClock.elapsedRealtime() at the beginning and at the end of the function and calculated the difference, and once again I was getting framerates of over 70, when in actual fact my phone was rendering over 1000 vertices and lagging tremendously.
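Here is a simplified sketch of what that looks like in my renderer (the class name and drawScene() are just placeholders standing in for my real code):

    import android.opengl.GLSurfaceView;
    import android.os.SystemClock;
    import android.util.Log;

    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;

    public class MyRenderer implements GLSurfaceView.Renderer {

        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            // shader compilation, buffer setup, ...
        }

        @Override
        public void onSurfaceChanged(GL10 gl, int width, int height) {
            // glViewport, projection setup, ...
        }

        @Override
        public void onDrawFrame(GL10 gl) {
            long start = SystemClock.elapsedRealtime();

            drawScene();  // all of the GLES20 clear/draw calls

            long frameTimeMs = SystemClock.elapsedRealtime() - start;
            Log.d("FrameTime", "onDrawFrame took " + frameTimeMs + " ms");
        }

        private void drawScene() {
            // glClear, glUseProgram, glDrawArrays / glDrawElements, ...
        }
    }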
So, how are you supposed to calculate frametime/framerate, and at which points should you start/stop timing?
Timing the call to requestRender() is certainly meaningless. You normally call it from the main thread, and it signals the rendering thread to wake up and render a frame. It returns as soon as it has done the signalling, so you would not measure anything rendering related at all.
Measuring from the start to the end of onDrawFrame() makes more sense, but it still won't give you what you're looking for. OpenGL operates asynchronously from the work you do on the CPU. In most cases, when you make an OpenGL API call, it only queues up work for later execution by the GPU. The call then returns, usually long before the GPU has completed the work, or even before the driver has submitted the work to the GPU.
So when you measure the time from the start to the end of onDrawFrame(), you measure how long it takes to make all your OpenGL calls, which includes time spent in your code and time the driver takes to handle your calls. But it does not measure how long the GPU takes to complete the work.
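You can see this effect for yourself with a quick experiment (illustration only, not something to leave in shipping code, since glFinish() stalls the pipeline): time the calls alone, then time them again after a glFinish(), which blocks until the GPU has actually finished the queued work. This snippet assumes it sits inside onDrawFrame(), with android.opengl.GLES20 imported, and drawScene() is a placeholder for your real GLES20 calls:

    long cpuStart = SystemClock.elapsedRealtime();
    drawScene();                                    // issue all the GLES20 calls
    long submitMs = SystemClock.elapsedRealtime() - cpuStart;

    GLES20.glFinish();                              // block until the GPU has finished the queued work
    long completeMs = SystemClock.elapsedRealtime() - cpuStart;

    Log.d("FrameTime", "submit: " + submitMs + " ms, complete: " + completeMs + " ms");

The gap between the two numbers is the GPU and driver work that the plain measurement never sees.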
The best you can do, while keeping things straightforward, is to render a sufficiently large number of frames (say a few hundred to a few thousand), and simply measure the elapsed time from start to finish. Then divide the number of frames you rendered by the total elapsed time in seconds, and you have your overall average framerate.
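A minimal sketch of that, kept inside the renderer (the field names and the 500-frame window are arbitrary choices, and drawScene() again stands in for your real drawing code):

    private static final int SAMPLE_FRAMES = 500;
    private int frameCount = 0;
    private long windowStartMs = -1;

    @Override
    public void onDrawFrame(GL10 gl) {
        long now = SystemClock.elapsedRealtime();
        if (windowStartMs < 0) {
            windowStartMs = now;            // start of the first measurement window
        } else if (++frameCount == SAMPLE_FRAMES) {
            double avgFps = frameCount * 1000.0 / (now - windowStartMs);
            Log.d("FrameRate", "average over " + frameCount + " frames: " + avgFps + " fps");
            frameCount = 0;                 // start the next window
            windowStartMs = now;
        }

        drawScene();                        // the actual GLES20 rendering
    }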
Things get a little more difficult if you don't only want an average framerate, but also want to observe variations. If you need that, I would start by measuring the elapsed time from the start of one call to onDrawFrame() to the start of the next call to onDrawFrame(). This is not a scientifically accurate way to measure the time for each frame, but it will be a whole lot better than what you tried, and should at least give you useful numbers. The reason it is not accurate is that even the start of onDrawFrame() is not necessarily synchronized with the GPU completing frames. I believe Android uses triple buffering, so there can be a couple of frames "in flight". In other words, the GPU can be 1-2 frames behind the rendering that your code is currently working on.
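That measurement also fits naturally into onDrawFrame() (again with placeholder names; SystemClock.elapsedRealtimeNanos() or System.nanoTime() would give you finer resolution than whole milliseconds if you need it):

    private long lastFrameStartMs = -1;

    @Override
    public void onDrawFrame(GL10 gl) {
        long now = SystemClock.elapsedRealtime();
        if (lastFrameStartMs >= 0) {
            // Time from the start of the previous frame to the start of this one.
            Log.d("FrameTime", "frame-to-frame delta: " + (now - lastFrameStartMs) + " ms");
        }
        lastFrameStartMs = now;

        drawScene();                        // the actual GLES20 rendering
    }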
There is a mechanism called "timer queries" in OpenGL that allows you to measure much more directly how much time the GPU spends executing a given sequence of commands. Unfortunately the feature is not part of ES 2.0; on ES it is only exposed through the EXT_disjoint_timer_query extension, where the driver supports it.