I am trying to read data into a texture with glTexImage2D. It seems that no matter what I try, the first row of texels is read correctly, but all further rows are filled from memory far beyond the bounds of the given std::vector:
GLuint renderedTexture;
glGenTextures(1, &renderedTexture);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
int w = 8;
int h = 8;
std::vector<GLuint> emptyData(w*h, 0);
emptyData[0] = 0xFFu << 8;  // blue  (GL_UNSIGNED_INT_8_8_8_8: R = bits 24-31, G = 16-23, B = 8-15, A = 0-7)
emptyData[1] = 0xFFu << 16; // green
emptyData[3] = 0xFFu << 24; // red (unsigned literal avoids shifting into the sign bit of an int)
emptyData[w] = 0xFFu << 16; // <- first texel of the second row, but it is not shown
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, emptyData.data());
Depending on the size of the texture, the code even segfaults. I have already tried a std::vector<GLubyte> instead, and I have also tried glPixelStorei(GL_UNPACK_ALIGNMENT, 1), but neither made a difference. What am I missing here?
Edit: Further experimenting revealed that the minimum row width assumed when reading the pixels seems to be 128 texels. So if I create a 64*64 texture and want a single pixel at x = 10, y = 14, I have to write it to emptyData[10 + 14*128] and reserve a correspondingly larger buffer beforehand, as sketched below. Does anyone know whether this is platform-dependent, or why it happens?
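For illustration, this is a minimal sketch of the workaround that reproduces what I observe (the 128-texel stride is just what I measured on my machine, not anything I set deliberately):
int w = 64;
int h = 64;
std::vector<GLuint> data(128 * h, 0);            // rows padded to the observed 128-texel stride
data[10 + 14 * 128] = 0xFFu << 16;               // pixel at (10, 14) actually shows up green
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, data.data());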
Reading your observation about the minimal row width, the GL_UNPACK_ROW_LENGTH pixel storage mode comes to mind (see the OpenGL reference page for glPixelStorei). It overrides the number of pixels per row that glTexImage2D assumes in client memory, normally used to skip line padding or other data between rows. By default it is zero, meaning rows are tightly packed at exactly width pixels, but a value of 128 would explain precisely the stride you are seeing. Maybe GL_UNPACK_ROW_LENGTH is set somewhere else in your code, or by a library you are using?
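If you cannot find the culprit, you can query the state and reset it to the default right before the upload; a minimal sketch:
GLint rowLength = 0;
glGetIntegerv(GL_UNPACK_ROW_LENGTH, &rowLength); // inspect the current unpack row length
// a non-zero value here (e.g. 128) means some earlier code or library changed it
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);          // restore the default: rows are tightly packed
With it back at zero, glTexImage2D reads exactly w pixels per row, so your emptyData[w] texel should land at (0, 1) as expected.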