
Can OpenGL convert integer pixel data to floating point or UNORM?


glTexImage2D takes an internalFormat parameter (which specifies the number of bits per component and the data type/encoding), a format parameter (which names the components but not their size or encoding), and a type parameter (which describes the client-side pixel data).

Is it possible, for example, to have OpenGL convert pixel data containing 32-bit integers, passed with format GL_RGB_INTEGER and type GL_INT, to the internal format GL_RGB32F?
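For concreteness, the attempted call would look something like this (a minimal sketch; the dimensions and the ipixels buffer are illustrative names, not taken from actual code):

    GLint ipixels[256 * 256 * 3];  /* 32-bit signed integer RGB data */
    /* ... fill ipixels ... */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F,  /* float internal format */
                 256, 256, 0,
                 GL_RGB_INTEGER, GL_INT,       /* integer pixel transfer format */
                 ipixels);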

The wiki article https://www.khronos.org/opengl/wiki/Pixel_Transfer#Format_conversion suggests to me that this is possible, since it states:

Pixels specified by the user must be converted between the user-specified format (with format​ and type​) and the internal representation controlled by the image format of the image.

But I wasn't able to read the data from a floating-point sampler in my shader.


Solution

  • The _INTEGER pixel transfer formats are only to be used for transferring data to integer image formats. You are filling in a floating-point texture, so that doesn't qualify. You should have gotten an OpenGL error (GL_INVALID_OPERATION; a check is sketched at the end of this answer).

    Indeed, the very article you linked to spells this out:

    Also, if "_INTEGER" is specified but the image format is not integral, then the transfer fails.

    GL_RGB32F is not an integral image format.

    If you remove the _INTEGER part, then the pixel transfer will "work". OpenGL will assume the integer data are normalized values, so you will get floating-point values in the range [-1, 1]. That is, if you pass a 32-bit integer value of 1, the corresponding floating-point value will be 1/(2^31 - 1), which is a very small number (and thus almost certainly rounds to 0.0).

    If you want OpenGL to convert the integer as if by a C-style cast ((float)1 yielding 1.0f)... well, there's actually no way to do that. You'll just have to convert the data yourself, as sketched below.
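    A minimal sketch of that manual conversion, assuming width, height, and a GLint *ipixels buffer already exist: convert on the CPU, then upload as plain GL_FLOAT data so no _INTEGER format is involved.

        #include <stdlib.h>

        size_t count = (size_t)width * height * 3;      /* RGB components */
        float *fpixels = malloc(count * sizeof *fpixels);
        for (size_t i = 0; i < count; ++i)
            fpixels[i] = (float)ipixels[i];             /* the C-style cast */

        /* Upload as ordinary floating-point data. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0,
                     GL_RGB, GL_FLOAT, fpixels);
        free(fpixels);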
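    For reference, here is how you could confirm the error mentioned at the top of this answer (same assumed variables; glGetError is standard OpenGL):

        /* _INTEGER pixel transfer format + non-integral internal format: */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0,
                     GL_RGB_INTEGER, GL_INT, ipixels);
        GLenum err = glGetError();
        if (err == GL_INVALID_OPERATION) {
            /* The transfer failed and the texture was not modified. */
        }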