This seems to be a rather straightforward problem (or at least as close to one as you can get with OpenGL). When I pass GL_RGB8, GL_RGBA8, or almost any multi-channel internal format, the line below produces no error.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, fontImgSize, fontImgSize, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
However, when I pass GL_R8 or any 8-bit one-channel variant, glGetError returns 0x501 (Invalid Value).
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, fontImgSize, fontImgSize, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
Any idea what's going on here? The computer I'm presently using is rather outdated and low-power, but if that were the problem, I doubt RGB8 or RGBA8 would work either.
For those curious, fontImgSize == 2048, the maximum texture size on my computer.
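For reference, that limit comes from a query along these lines (a minimal sketch; the variable name is made up):

GLint maxTexSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);  /* reports 2048 on this machine */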
Edit: It appears GL_RG8 and the other two-channel/16-bit formats also produce 0x501.
The three- and four-component internal formats (like GL_RGB8 and GL_RGBA8) have been a core feature of OpenGL practically forever, but the one- and two-component internal formats (like GL_R8 and GL_RG8) are a comparatively new feature: they were introduced with OpenGL 3.0 (and the GL_ARB_texture_rg extension), so they need reasonably recent hardware and a matching driver. That is not exactly new anymore, but if your machine is as outdated as you say, it may still be too new for it.
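If you want to detect this at runtime, a rough sketch could look like the following (assuming a current GL context; the helper name hasTextureRG is made up):

#include <string.h>
#include <GL/gl.h>

/* Returns non-zero if GL_R8/GL_RG8 should be usable: either the context
   is OpenGL 3.0 or newer, or GL_ARB_texture_rg is advertised. */
static int hasTextureRG(void)
{
    /* The version string starts with "major.minor", e.g. "3.3.0 ...". */
    const char *version = (const char *)glGetString(GL_VERSION);
    if (version && version[0] >= '3')
        return 1;

    /* Pre-3.0 context: fall back to the classic extension string. */
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, "GL_ARB_texture_rg") != NULL;
}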
There are older (now deprecated) one- and two-component internal formats like GL_LUMINANCE, GL_LUMINANCE_ALPHA and GL_INTENSITY (and their respective sized versions, e.g. GL_LUMINANCE8), which should be supported by any older implementation. Note, however, that they have slightly different semantics and filtering/copying behaviour than the newer plain color formats; still, they may be sufficient for your use case.
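As a hedged sketch of what such a fallback might look like for your font atlas (reusing the hypothetical hasTextureRG() helper from above):

if (hasTextureRG()) {
    /* Modern path: a single red channel, sampled as (r, 0, 0, 1). */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, fontImgSize, fontImgSize,
                 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
} else {
    /* Legacy path: luminance is replicated into r, g and b when sampled. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8, fontImgSize, fontImgSize,
                 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
}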
Also note that GL_RED as an external format (the format of the CPU-side pixel data used for reading/writing the texture data) has always been available, whereas GL_RG as an external format was introduced together with the new internal formats. The external format is, in any case, completely independent of the internal format the texture is actually stored in.