Should I even pay attention to the native image format for OpenGL textures?


For example, the glTexImage2D function has the parameter internalformat, which specifies the format the texture data will have internally. The GL driver then converts the data I supply through the format, type and data parameters into that internal format.
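
To make the terminology concrete, here is a minimal sketch of such an upload, where the supplied format/type match the requested internal format; pixels, width and height are placeholders for whatever the image loader provides, and a current GL context with the usual GL headers/loader is assumed:

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D,
                 0,                /* mip level */
                 GL_RGBA8,         /* internalformat: 8 bits per channel */
                 width, height,
                 0,                /* border, must be 0 */
                 GL_RGBA,          /* format of the data I supply */
                 GL_UNSIGNED_BYTE, /* type of the data I supply */
                 pixels);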

Many sources (even official documents from vendors) say that the data I supply should match the internal format. For example, page 33 of this document from Intel: https://web.archive.org/web/20200428134853/https://software.intel.com/sites/default/files/managed/49/2b/Graphics-API-Performance-Guide-2.5.pdf

When uploading textures, provide textures in a format that is the same as the internal format, to avoid implicit conversions in the graphics driver.

But I see some issues with this approach:

  • I simply do not know what the native formats of the graphics card are. It may be RGBA with 10 bits per channel as a normalized integer, or something even more exotic, so the driver may have to do a conversion anyway. The OpenGL specification only defines a set of internal formats that the implementation is required to support exactly; the driver may still map such an internal format to some other "native format". (On newer GL versions the implementation can at least be asked what it prefers, see the query sketch after this list.)
  • In most cases, I will load my textures from external sources in a format over which I have little influence. So I have two choices: write a conversion function in my own application, or let the driver do the work. In my opinion, the second option is the better one: the driver likely has highly optimized, well-tested conversion routines, so it will be far less error prone than my own code.
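
As referenced in the first point: on OpenGL 4.3, or with ARB_internalformat_query2 (an assumption about the available GL version), the implementation can at least be asked which external format/type it prefers for uploads into a given internal format. A minimal sketch:

    GLint preferred_format = 0, preferred_type = 0;
    glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8,
                          GL_TEXTURE_IMAGE_FORMAT, 1, &preferred_format);
    glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8,
                          GL_TEXTURE_IMAGE_TYPE, 1, &preferred_type);
    /* Some implementations report e.g. GL_BGRA / GL_UNSIGNED_INT_8_8_8_8_REV
       here; data supplied in that layout can then skip a conversion. */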

So, is there really a need to worry about these things, or is it perfectly fine to just feed OpenGL the data as it is?


Solution

  • Your biggest concern shouldn't be the time you need to upload a single image to the GPU.

    The biggest bottleneck is not the thing you do only once but the thing you do repeatedly. If, for example, you exceed the limit of resident textures, the OpenGL implementation may swap textures out into main memory (which can then become a bottleneck).

    But if you're able to save memory by using lower bit depths or compressed formats (see the sketch below), you'll be able to keep more textures on the GPU, which leads to better (smoother) performance.
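
    As a sketch of that trade-off (the pixels/width/height names are placeholders, and a current GL context is assumed), both of the following requests accept the same 8-bit RGBA input but store it in less memory than GL_RGBA8:

        /* Smaller uncompressed internal format: 16 bits per texel. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        /* Generic compressed internal format: the driver compresses on upload.
           (Pre-compressed data would go through glCompressedTexImage2D instead.) */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);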

    One thing you should also keep in mind is that your application is not the only one acquiring GPU resources. For an overall smooth experience, use only what you need (the bare minimum); don't be greedy.

    Sure, with gigabytes of GPU memory it takes a lot to bring the GPU to its limits, but that doesn't mean there is no limit.


    After your comment I've reread your question, and I think I slightly misinterpreted it.

    I would say you're right in thinking that the driver has highly optimized conversion routines, so within compatible color formats it does not make much sense to convert the data yourself.

    But in cases like YUV to RGB conversion, the only alternatives are to use a texture for each plane and do the calculation in the shader (a sketch of that route follows below), or to convert the YUV data into RGB triplets yourself. The same goes for HSV or CMYK color formats, where a conversion on the host side is unavoidable.
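
    A sketch of the per-plane route for 4:2:0 YUV (the plane pointers and dimensions are assumptions about what the decoder delivers; single-channel GL_R8 textures require GL 3.0+): each plane is uploaded separately, and the fragment shader samples all three and applies the YUV-to-RGB matrix itself.

        GLuint planes[3];
        glGenTextures(3, planes);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1); /* plane rows may not be 4-byte aligned */

        /* Full-resolution luma plane. */
        glBindTexture(GL_TEXTURE_2D, planes[0]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
                     GL_RED, GL_UNSIGNED_BYTE, y_plane);

        /* Half-resolution chroma planes (4:2:0 subsampling). */
        glBindTexture(GL_TEXTURE_2D, planes[1]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width / 2, height / 2, 0,
                     GL_RED, GL_UNSIGNED_BYTE, u_plane);

        glBindTexture(GL_TEXTURE_2D, planes[2]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width / 2, height / 2, 0,
                     GL_RED, GL_UNSIGNED_BYTE, v_plane);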

    Again: within compatible formats, let the driver do the work; otherwise, convert yourself.


    As a side note:

    Depending on the OpenGL stack you're using, EGL, for example, lets you choose a frame buffer configuration with specific attributes (see https://registry.khronos.org/EGL/sdk/docs/man/html/eglChooseConfig.xhtml, e.g. EGL_CONFIG_CAVEAT). Based on the chosen configuration, you then know your frame buffer properties (bit depth, color channel sizes, etc.).
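
    A minimal sketch of that approach, assuming an already-initialized EGLDisplay named display and an OpenGL ES 2.0 context as the target (both assumptions):

        #include <EGL/egl.h>

        /* Request an 8-bit RGBA configuration without caveats, then read back
           what was actually granted. */
        static const EGLint attribs[] = {
            EGL_RED_SIZE,        8,
            EGL_GREEN_SIZE,      8,
            EGL_BLUE_SIZE,       8,
            EGL_ALPHA_SIZE,      8,
            EGL_CONFIG_CAVEAT,   EGL_NONE,          /* no slow/non-conformant configs */
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
            EGL_NONE
        };

        EGLConfig config;
        EGLint num_configs = 0;
        if (eglChooseConfig(display, attribs, &config, 1, &num_configs)
                && num_configs > 0) {
            EGLint red = 0, green = 0, blue = 0, alpha = 0;
            eglGetConfigAttrib(display, config, EGL_RED_SIZE,   &red);
            eglGetConfigAttrib(display, config, EGL_GREEN_SIZE, &green);
            eglGetConfigAttrib(display, config, EGL_BLUE_SIZE,  &blue);
            eglGetConfigAttrib(display, config, EGL_ALPHA_SIZE, &alpha);
            /* red/green/blue/alpha now describe the frame buffer that was
               actually granted. */
        }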