glViewport() takes width/height as integers (which are pixels). But glViewportIndexed takes these values as floats. What is the advantage of having them as floats? My understanding is based on the fact that pixels are always integers.
It may look like the glViewport*() calls specify pixel rectangles. But if you look at the details of the OpenGL rendering pipeline, that's not the case. They specify the parameters for the viewport transformation. This is the transformation that maps normalized device coordinates (NDC) to window coordinates.
If x, y, w and h are your specified viewport dimensions, and xNdc and yNdc your NDC coordinates, the viewport transformation can be written like this:
xWin = x + 0.5 * (xNdc + 1.0) * w;
yWin = y + 0.5 * (yNdc + 1.0) * h;
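
To make the mapping concrete, here is a minimal self-contained sketch of that formula in plain C (ndc_to_win is an illustrative helper of my own, not a GL function):

    #include <stdio.h>

    /* Maps one NDC coordinate to window space, per the formula above. */
    static float ndc_to_win(float ndc, float origin, float size) {
        return origin + 0.5f * (ndc + 1.0f) * size;
    }

    int main(void) {
        /* A viewport with fractional origin and size works just fine. */
        float left   = ndc_to_win(-1.0f, 10.5f, 333.3333f); /* -> 10.5      */
        float center = ndc_to_win( 0.0f, 10.5f, 333.3333f); /* -> ~177.1667 */
        printf("left=%f center=%f\n", left, center);
        return 0;
    }

Nothing in this computation cares whether the origin or size are whole numbers.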
In this calculation, xNdc and yNdc are of course floating point values, in their usual [-1.0, 1.0] range. I do not see any good reason why x, y, w and h should be restricted to integer values in this calculation. This transformation is applied before rasterization, so there is no need to round anything to pixel units.
Not needing integer values for the viewport dimensions could even be practically useful. Say you have a window of size 1000x1000, and you want to render 9 sub-views of equal size in the window. There's no reason for the API to stop you from doing what's most natural: make each sub-view the size 333.3333x333.3333, and use those sizes for the parameters of glViewport().
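
With viewport arrays (the same GL 4.1 feature glViewportIndexed belongs to), that 3x3 layout could be set up like this. The loop and the index assignment are just one possible arrangement, assuming a context where viewport arrays are available:

    /* Nine equal sub-views in a 1000x1000 window, one per viewport index. */
    const float size = 1000.0f / 3.0f;  /* 333.3333..., no rounding needed */
    for (int row = 0; row < 3; ++row) {
        for (int col = 0; col < 3; ++col) {
            glViewportIndexedf(row * 3 + col,  /* viewport array index */
                               col * size,     /* x, in window space   */
                               row * size,     /* y, in window space   */
                               size, size);    /* fractional w and h   */
        }
    }

The fractional sizes tile the window exactly; rounding each sub-view down to 333 pixels would leave a 1-pixel strip uncovered.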
If you look at glScissorIndexed() for comparison, you will notice that it still takes integer coordinates. This makes complete sense, because glScissor() does in fact specify a region of pixels in the window, unlike glViewport().
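
If you do combine the two, the scissor box has to land on whole pixels while the viewport does not. As a sketch (set_view_and_scissor is a hypothetical helper, and the conservative rounding policy is my choice, not something GL mandates):

    #include <math.h>  /* floorf, ceilf */

    /* Sets a fractional viewport and an integer scissor box that fully
     * covers it, at the given viewport array index. */
    void set_view_and_scissor(GLuint idx, float x, float y, float w, float h)
    {
        glViewportIndexedf(idx, x, y, w, h);

        GLint left   = (GLint)floorf(x);
        GLint bottom = (GLint)floorf(y);
        glScissorIndexed(idx, left, bottom,
                         (GLsizei)(ceilf(x + w) - (float)left),
                         (GLsizei)(ceilf(y + h) - (float)bottom));
    }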