I am working on a 2D graphics application with OpenGL (similar to QGIS). Recently, while running some benchmarks, I noticed a strange performance difference between my two graphics cards. So I made a simple test and drew just 1 million squares using a VBO. That gives 4 million vertices at 20 bytes each, so the total VBO size is 80 MB, and I draw the whole thing with a single glDrawElements call. When I measured the render time on my laptop, which has two graphics cards, it takes about 43 ms on the GeForce and about 1 ms on the integrated Intel card. I expected it to be faster on the GeForce. Why is that? Should I disable some OpenGL options?
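Here is a stripped-down sketch of the test (the 20 bytes per vertex come from two double-precision coordinates plus a 4-byte colour; extension/function loading and context creation are omitted):

```
/* Simplified benchmark: 1M squares, one glDrawElements call.
 * 20 bytes per vertex: 2 x GL_DOUBLE position (16) + 4 x GL_UNSIGNED_BYTE colour (4).
 * Assumes an existing GL context with VBO support (GL 1.5+). */
#include <GL/gl.h>

#define NUM_QUADS     1000000
#define NUM_VERTICES  (NUM_QUADS * 4)          /* 4M vertices              */
#define NUM_INDICES   (NUM_QUADS * 6)          /* 2 triangles per square   */
#define VERTEX_STRIDE 20                       /* 4M * 20 B = 80 MB VBO    */

static GLuint vbo, ibo;

/* verts points to NUM_VERTICES * VERTEX_STRIDE bytes of interleaved data,
 * indices to NUM_INDICES unsigned ints */
void upload(const void *verts, const GLuint *indices)
{
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)NUM_VERTICES * VERTEX_STRIDE,
                 verts, GL_STATIC_DRAW);

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, (GLsizeiptr)NUM_INDICES * sizeof(GLuint),
                 indices, GL_STATIC_DRAW);
}

void draw(void)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(2, GL_DOUBLE, VERTEX_STRIDE, (const void *)0);
    glColorPointer(4, GL_UNSIGNED_BYTE, VERTEX_STRIDE, (const void *)16);

    /* the whole scene in a single draw call */
    glDrawElements(GL_TRIANGLES, NUM_INDICES, GL_UNSIGNED_INT, 0);

    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```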
My system specification: ASUS N53m with an integrated Intel graphics card and a GeForce GT 610M.
EDIT:
I also tested on another system with an AMD Radeon HD 5450, and it was about 44 ms again. Switching to single precision reduced it to 30 ms, but the integrated GPU is still faster!
It is definitely not a measurement issue, because I can see the lag when zooming in/out.
The run-time behavior of different OpenGL implementations varies vastly, as I found out in my experiments on low-latency rendering techniques for VR. In general, the only truly reliable timing interval, the one that gives consistent results, is the inter-frame time between the very same step in your drawing: i.e. measure the time from buffer swap to buffer swap (if you want to measure raw drawing performance, disable V-Sync), or between the same glClear calls.
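A minimal sketch of such a swap-to-swap measurement, assuming GLFW for window and context creation (any toolkit works the same way; only the placement of the timestamps matters):

```
#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    GLFWwindow *win = glfwCreateWindow(800, 600, "frame timing", NULL, NULL);
    if (!win) { glfwTerminate(); return 1; }

    glfwMakeContextCurrent(win);
    glfwSwapInterval(0);               /* disable V-Sync for raw drawing performance */

    double last = glfwGetTime();
    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);

        /* draw_scene();  e.g. the single glDrawElements call from the question */

        glfwSwapBuffers(win);
        glfwPollEvents();

        /* swap-to-swap interval = one full frame */
        double now = glfwGetTime();
        printf("frame time: %.3f ms\n", (now - last) * 1000.0);
        last = now;
    }

    glfwTerminate();
    return 0;
}
```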
Everything else is only consistent within a given implementation, but not across vendors (at the time of testing I had no AMD GPU around, so I lack data on that). A few notable corner cases I discovered (see the probe sketch after this list):
SwapBuffers
glFinish
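A rough way to see where a particular driver actually blocks is to timestamp these two calls separately within one frame; how to interpret the numbers is, as said, vendor-specific (GLFW is again only assumed for timing and the swap):

```
#include <GLFW/glfw3.h>
#include <stdio.h>

void probe_frame(GLFWwindow *win)
{
    double t0 = glfwGetTime();
    glClear(GL_COLOR_BUFFER_BIT);
    /* draw_scene(); */

    glFinish();                        /* may or may not include presentation */
    double t1 = glfwGetTime();

    glfwSwapBuffers(win);              /* may return immediately or block     */
    double t2 = glfwGetTime();

    printf("glFinish: %.3f ms, SwapBuffers: %.3f ms\n",
           (t1 - t0) * 1000.0, (t2 - t1) * 1000.0);
}
```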
I have yet to test what the Intel driver does when bypassing X11 (using KMS). Note that the OpenGL specification leaves it up to the implementation how and when it does certain things, as long as the outcome is consistent and conforms to the specification. And all of the observed behavior is perfectly conformant.