Tags: c++, opengl, shader

Is extracting the binaries from a GLSL shader a standard, supported operation? If so, how do we build glad.c to support it?


We have been working on an OpenGL program for which glad was built two summers ago. It works on Linux and Windows on cards such as an NVIDIA 2060 under Ubuntu 20.04 LTS, Intel GPUs on Windows and Ubuntu, a GeForce 940MX, and others.

On Linux the driver I personally am using is nouveau on this laptop.

*-display                 
   description: VGA compatible controller
   product: HD Graphics 620
   vendor: Intel Corporation
   physical id: 2
   bus info: pci@0000:00:02.0
   version: 02
   width: 64 bits
   clock: 33MHz
   capabilities: pciexpress msi pm vga_controller bus_master cap_list rom
   configuration: driver=i915 latency=0
   resources: irq:129 memory:a2000000-a2ffffff memory:b0000000-bfffffff ioport:4000(size=64) memory:c0000-dffff

  *-display
   description: 3D controller
   product: GM108M [GeForce 940MX]
   vendor: NVIDIA Corporation
   physical id: 0
   bus info: pci@0000:01:00.0
   version: a2
   width: 64 bits
   clock: 33MHz
   capabilities: pm msi pciexpress bus_master cap_list rom
   configuration: driver=nouveau latency=0
   resources: irq:131 memory:a3000000-a3ffffff memory:90000000-9fffffff memory:a0000000-a1ffffff ioport:3000(size=128)

In a previous question, I asked why we were getting a segfault when trying to get the binaries from a shader program. The fragmentary answer given was that perhaps glad.c was built wrong.

This isn't, in my view, an acceptable answer, but perhaps I need to construct a better question. Is there any way to debug OpenGL segfaulting when extracting code from a binary shader?

  1. Is extracting the binary from shader programs a standard feature that will work on all modern OpenGL implementations and drivers? Let's say Windows/Intel, Windows/NVIDIA, Linux/Intel, Linux/NVIDIA with Nouveau, and/or Linux/NVIDIA with the proprietary NVIDIA driver.

  2. If it doesn't work on some platforms, what is the clean programmatic way to test for this? How do I tell if the feature is not supported so I can dynamically disable it if it does not exist?

  3. If we have generated glad.c incorrectly, and that is the reason the feature is not working, how do I generate it correctly? I just went to glad.dav1d.de, selected OpenGL 4.6 core, and generated. Is that right? If not, what do I do?


Solution

    1. Is extracting the binary from shader programs a standard feature that will work on all modern OpenGL implementations and drivers? Let's say Windows/Intel, Windows/NVIDIA, Linux/Intel, Linux/NVIDIA with Nouveau, and/or Linux/NVIDIA with the proprietary NVIDIA driver.

    Retrieving the binary representation of a compiled shader program is specified in the ARB_get_program_binary OpenGL extension. This feature has also been part of core OpenGL since version 4.1. This means that you can use this feature if any of the following is true:

    • The GL context you're using has at least Version 4.1
    • The GL implementation you are using advertises the availability of this feature (on this context) by including GL_ARB_get_program_binary in the GL extension string.

    Every reasonably modern GPU should support GL 4.1, so this feature should be widely available. However, some implementations may support OpenGL 4.x only in core profile. If you work with compatibility or legacy profiles, you may be out of luck.
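    If you are not using a loader at all, you can check the extension string yourself. On a core profile you would enumerate extensions one by one with glGetStringi(GL_EXTENSIONS, i); on compatibility contexts, glGetString(GL_EXTENSIONS) returns one space-separated string, and a naive substring search on it is subtly wrong. The helper below is a minimal sketch of an exact-token match on such a string (the function name hasExtension is my own, not part of any API):

    ```cpp
    #include <cassert>
    #include <cstring>

    // Check for an exact extension token in a space-separated extension
    // string (the format returned by glGetString(GL_EXTENSIONS) on
    // compatibility contexts). A plain substring search is not enough:
    // the name could match a prefix of a longer, different extension.
    bool hasExtension(const char* extensions, const char* name) {
        if (!extensions || !name || !*name) return false;
        const std::size_t len = std::strlen(name);
        const char* p = extensions;
        while ((p = std::strstr(p, name)) != nullptr) {
            const bool startOk = (p == extensions) || (p[-1] == ' ');
            const bool endOk   = (p[len] == ' ') || (p[len] == '\0');
            if (startOk && endOk) return true;
            p += len;  // keep scanning past this partial match
        }
        return false;
    }
    ```

    With glad in the picture, you do not need this by hand, since glad performs the same lookup for you when it is initialized.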

    2. If it doesn't work on some platforms, what is the clean programmatic way to test for this? How do I tell if the feature is not supported so I can dynamically disable it if it does not exist?

    This is one of the main reasons for having an extension mechanism at all. Since you use the glad GL loader, this can be done via glad quite easily. After you have created the context and initialized glad, you can query the availability of this feature at runtime:

    if (GLAD_GL_VERSION_4_1 || GLAD_GL_ARB_get_program_binary) {
      // feature is available...
    }
    

    Since core OpenGL and the extension specify exactly the same function and enum names (without any extension suffix), you can call these functions regardless of whether they were acquired via the core OpenGL feature set or via the extension.
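    Once you have confirmed the feature is available, retrieval itself is a two-step call: query the binary size, then fetch the blob. A minimal sketch, assuming `program` is a successfully linked program object on a live GL context (note that the spec recommends setting GL_PROGRAM_BINARY_RETRIEVABLE_HINT before linking):

    ```c++
    // Before glLinkProgram(program), ideally:
    //   glProgramParameteri(program, GL_PROGRAM_BINARY_RETRIEVABLE_HINT, GL_TRUE);

    GLint length = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);

    if (length > 0) {
        std::vector<GLubyte> binary(length);
        GLenum  format  = 0;
        GLsizei written = 0;
        glGetProgramBinary(program, length, &written, &format, binary.data());
        // `format` is a driver-specific token; store it alongside the blob
        // and pass it back to glProgramBinary() when reloading.
    }
    ```

    Calling these entry points when glad has left the pointers NULL (because neither GL 4.1 nor the extension is present) is exactly the kind of thing that produces a segfault, which is why the runtime check above must come first.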

    Please note that it will matter how you create the context, and which version you request when you create it. If you ask for a context below version 4.1, you might not get one of version 4.1 or higher even if the implementation technically would support that version. Typically, the extension would be available in that case anyway, but that isn't a requirement.

    3. If we have generated glad.c incorrectly, and that is the reason the feature is not working, how do I generate it correctly? I just went to glad.dav1d.de, selected OpenGL 4.6 core, and generated. Is that right? If not, what do I do?

    The only requirement for the above code to work is that you generated the glad loader for at least OpenGL 4.1 and for the GL_ARB_get_program_binary extension. If you generated for 4.6 but left out the extension, then glad will never look for that extension and GLAD_GL_ARB_get_program_binary will not be defined. You would then miss out on the ability to use the extension when working with GL contexts < 4.1, even if it is supported by your GL implementation.