If I disable textures it correctly draws a white cube, but when I enable textures it draws nothing (or a black square, the same color as the background). The only thing I suspect is GL_INT, because up to this point I have only used unsigned bytes in my projects.
Edit: my window is RGB
Here is my code:
self.texture = glGenTextures(1)
glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
glPixelStorei(GL_PACK_ALIGNMENT, 1)
glBindTexture(GL_TEXTURE_2D, self.texture)
pix=[255,255,255,0,0,0,255,255,255,0,0,0]
glMatrixMode(GL_PROJECTION)
glLoadIdentity()
glOrtho(0.0, 640, 480, 0.0, 0.0, 100.0)
glMatrixMode(GL_MODELVIEW)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP)
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL)
glTexImage2D(GL_TEXTURE_2D, 0, 3, 2, 2, 0, GL_RGB, GL_INT, pix)
glEnable(GL_TEXTURE_2D)
glDisable( GL_LIGHTING)
glBegin(GL_QUADS)
glTexCoord2i(0, 0); glVertex2i(100, 100)
glTexCoord2i(0, 1); glVertex2i(100, 200)
glTexCoord2i(1, 1); glVertex2i(200, 200)
glTexCoord2i(1, 0); glVertex2i(200, 100)
glEnd()
glutSwapBuffers()
Well, you should not confuse the format of the image data you have in client memory with the internal format of the texture. In your case, you use 3 as the internalFormat parameter, which is a deprecated shortcut for GL_RGB and which in practice means 8 bits per channel of RGB data.
When you use GL_INT as the client format, the GL will convert the data to the internal format. In this case, it will interpret your data as normalized integers, so INT_MAX (roughly 2.1 billion) will be mapped to 1.0. Since you only use values up to 255, these will all end up as zero in the internal texture format, so your texture is effectively black.
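To make that conversion concrete, here is a small Python sketch of the normalization the GL performs on GL_INT data feeding an 8-bit internal format. It is a simplification (roughly v / INT_MAX; the spec's exact signed-normalization formula differs by a rounding detail), but the outcome for the question's data is the same:

```python
# Simplified model of GL's normalized-integer conversion when GL_INT
# client data is uploaded into an 8-bit normalized internal format.
INT_MAX = 2**31 - 1  # 2147483647

def gl_int_to_8bit_channel(v):
    """Normalize a GL_INT texel value to [0.0, 1.0], then quantize to 8 bits."""
    normalized = max(0.0, min(1.0, v / INT_MAX))
    return round(normalized * 255)

print(gl_int_to_8bit_channel(255))      # 0   -- the question's "white" texels
print(gl_int_to_8bit_channel(INT_MAX))  # 255
```

So every 255 in the question's pix array quantizes to a channel value of 0. The immediate fix is to describe the data as what it actually is, i.e. glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2, 2, 0, GL_RGB, GL_UNSIGNED_BYTE, pix), so that 255 maps directly to 1.0.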
I don't know what exactly you are trying to do. Using a client-side format with higher precision than the internal format will almost never be a good idea, since all the extra precision will be lost anyway and the data transfer will be slower due to the additional conversion. If you need higher precision, you should actually use an internal format with more bits per channel. However, it is totally unclear to me whether you actually want that, especially since you are using the fixed-function pipeline.