For example, the `glTexImage2D` function has the parameter `internalformat`, which specifies the format the texture data will have internally. The GL driver will then convert the data I supply through the `format`, `type` and `data` parameters.
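For illustration, here is a minimal sketch (assuming an 8-bit RGBA image coming from some loader) of how these parameters map onto a call where the supplied data already matches the internal format:

```c
#include <GL/gl.h>

/* Sketch: upload an RGBA image where the supplied data matches the
 * requested internal format, so the driver should not need to convert.
 * `pixels`, `width` and `height` are assumed to come from your image loader. */
void upload_rgba8(GLuint tex, GLsizei width, GLsizei height, const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D,
                 0,                /* mipmap level */
                 GL_RGBA8,         /* internalformat: how the GPU stores it */
                 width, height,
                 0,                /* border, must be 0 */
                 GL_RGBA,          /* format of the supplied data */
                 GL_UNSIGNED_BYTE, /* type of the supplied data */
                 pixels);          /* data */
}
```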
Many sources (even official vendor documents) say that the data I supply should match the internal format. For example, page 33 of this document from Intel: https://web.archive.org/web/20200428134853/https://software.intel.com/sites/default/files/managed/49/2b/Graphics-API-Performance-Guide-2.5.pdf
> When uploading textures, provide textures in a format that is the same as the internal format, to avoid implicit conversions in the graphics driver.
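To illustrate the difference, here is a hypothetical call where the supplied data does not match: the source is float RGB, but the texture is stored as 8-bit RGBA, so the driver has to convert every texel during the upload (the function and variable names are placeholders):

```c
#include <GL/gl.h>

/* Hypothetical mismatched upload: the source data is float RGB, but the
 * texture is stored as 8-bit RGBA, so the driver must convert every texel
 * (and synthesize an alpha channel) during the upload. */
void upload_with_conversion(GLuint tex, GLsizei width, GLsizei height,
                            const float *float_pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGB, GL_FLOAT, float_pixels);
}
```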
But I see some issues with this approach. So, is there really a need to worry about these things, or is it perfectly fine to just feed OpenGL the data as it is?
Your biggest concern shouldn't be the time it takes to upload a single image to the GPU.
The biggest bottleneck is not the work you do only once, but the work you do repeatedly. If, for example, you exceed the limit of resident textures, the OpenGL implementation may swap textures out into main memory (which can then become a bottleneck).
But if you can save memory by using lower bit depths or compressed formats, you'll be able to keep more textures on the GPU, which leads to better (smoother) performance.
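As a sketch of the compressed route, assuming pre-compressed DXT5/BC3 data (e.g. from a .dds file) and that the EXT_texture_compression_s3tc extension is available (check the extension string before relying on it), the upload could look like this:

```c
#include <GL/gl.h>
#include <GL/glext.h> /* for GL_COMPRESSED_RGBA_S3TC_DXT5_EXT */

/* Sketch: upload pre-compressed DXT5/BC3 data. The data is assumed to be
 * already compressed offline, so the driver just copies the blocks. */
void upload_dxt5(GLuint tex, GLsizei width, GLsizei height, const void *blocks)
{
    /* DXT5 stores 4x4 texel blocks at 16 bytes per block. */
    GLsizei image_size = ((width + 3) / 4) * ((height + 3) / 4) * 16;

    glBindTexture(GL_TEXTURE_2D, tex);
    glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                           GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                           width, height, 0, image_size, blocks);
}
```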
One thing you should also keep in mind is that your application is not the only one acquiring GPU resources. For an overall smooth experience, use only what you need (the bare minimum); don't be greedy.
Sure, with gigabytes of GPU memory it takes a lot to push the GPU to its limits, but that doesn't mean there is no limit.
After your comment I've reread your question, and I think I slightly misinterpreted it.
I would say you're right: the driver has highly optimized conversion routines, so it does not make much sense to convert between compatible color formats yourself.
But in cases like YUV-to-RGB conversion, the only alternatives are to upload one texture per plane (and do the calculation in the shader) or to convert the YUV data into RGB triplets yourself. The same goes for HSV or CMYK color formats, where a conversion on the host side is unavoidable.
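A rough sketch of the per-plane approach, assuming planar YUV in three single-channel (GL_R8) textures and a full-range BT.601 conversion (the uniform and varying names are made up for this example):

```c
/* Sketch: a GLSL fragment shader (kept here as a C string) that samples
 * one texture per YUV plane and does the BT.601 conversion to RGB itself,
 * so no CPU-side conversion is needed. The plane textures are assumed to
 * be single-channel (GL_R8) and bound to texture units 0, 1 and 2. */
static const char *yuv_to_rgb_fs =
    "#version 330 core\n"
    "uniform sampler2D plane_y;\n"
    "uniform sampler2D plane_u;\n"
    "uniform sampler2D plane_v;\n"
    "in vec2 uv;\n"
    "out vec4 color;\n"
    "void main() {\n"
    "    float y = texture(plane_y, uv).r;\n"
    "    float u = texture(plane_u, uv).r - 0.5;\n"
    "    float v = texture(plane_v, uv).r - 0.5;\n"
    "    color = vec4(y + 1.402 * v,\n"
    "                 y - 0.344 * u - 0.714 * v,\n"
    "                 y + 1.772 * u,\n"
    "                 1.0);\n"
    "}\n";
```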
Again: within compatible formats, let the driver do the work; otherwise, convert yourself.
As a side note:
Depending on the OpenGL stack you're using, EGL for example lets you choose a frame buffer configuration with specific attributes (see eglChooseConfig: https://registry.khronos.org/EGL/sdk/docs/man/html/eglChooseConfig.xhtml, e.g. EGL_CONFIG_CAVEAT). Based on the configuration you've chosen, you know your frame buffer's properties (bit depth, color size, etc.).
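A minimal sketch of such a request, assuming you want an 8-bit-per-channel configuration without caveats:

```c
#include <EGL/egl.h>
#include <stddef.h>

/* Sketch: request an 8-bit-per-channel configuration with no caveats,
 * so the frame buffer properties are known up front. */
static const EGLint attribs[] = {
    EGL_RED_SIZE,      8,
    EGL_GREEN_SIZE,    8,
    EGL_BLUE_SIZE,     8,
    EGL_ALPHA_SIZE,    8,
    EGL_CONFIG_CAVEAT, EGL_NONE, /* exclude slow / non-conformant configs */
    EGL_NONE
};

EGLConfig choose_config(EGLDisplay display)
{
    EGLConfig config;
    EGLint num_configs = 0;
    if (!eglChooseConfig(display, attribs, &config, 1, &num_configs)
        || num_configs == 0) {
        return NULL; /* no matching configuration; handle the error */
    }
    return config;
}
```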