
Access Violation on glDelete*


I've got a strange problem here: I have a potentially large (as in up to 500 MB) 3D texture which is re-created several times per second. The size of the texture can change, so reusing the old texture is not always an option. The logical step to avoid running out of memory is to delete the texture whenever it is no longer used (using glDeleteTextures), but the program soon crashes with a read or write access violation. The same thing happens with glDeleteBuffers when called on the buffer I use to update the texture.
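
Roughly, the allocation/teardown cycle looks like this (a stripped-down sketch, not the actual wrapper code; newW/newH/newD stand for the current volume size):

GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_3D, tex);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8,
             newW, newH, newD, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, 0); // allocate storage only
// ... fill the texture from the update buffer, render with it ...
glDeleteTextures(1, &tex);                  // <- eventually crashes here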

In my eyes this can't happen, as the glDelete* functions are pretty failsafe: if you pass them a GL handle that does not name a corresponding object, they simply do nothing.
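
That matches my reading of the spec; for instance, this should be a silent no-op and raise no GL error:

GLuint bogus = 424242;               // a name glGenTextures never handed out
glDeleteTextures(1, &bogus);         // the spec says non-texture names are silently ignored
assert(glGetError() == GL_NO_ERROR); // and no error is recorded either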

The interesting thing is that if I simply don't delete the textures and buffers, the program runs fine until it eventually runs out of memory on the graphics card.

This is running on Windows XP 32-bit, an NVIDIA GeForce 9500 GT with 266.58 drivers; the programming language is C++ in Visual Studio 2005.

Update

Apparently glDelete* is not the only thing affected. I just got violations in several other methods (which wasn't the case yesterday)... looks like something is damn broken here.

Update 2

This shouldn't fail, should it?

template <> inline
Texture<GL_TEXTURE_3D>::Texture(
    GLint internalFormat,
    glm::ivec3 size,
    GLint border ) : Wrapper<detail::gl_texture>()
{
    glGenTextures(1, &object.t);

    std::vector<GLbyte> tmp(glm::compMul(size)*4);
    glTextureImage3DEXT(
        object,             // texture
        GL_TEXTURE_3D,          // target
        0,                      // level
        internalFormat,         // internal format
        size.x, size.y, size.z, // size
        border,                 // border
        GL_RGBA,                // format
        GL_BYTE,                // type
        &tmp[0]);               // zero-initialized dummy data
}

fails with:

Exception (first chance) at 0x072c35c0: 0xC0000005: Access violation while writing to position 0x00000004.
Unhandled exception at 0x072c35c0 in Project.exe: 0xC0000005: Access violation while writing to position 0x00000004.

Best guess: something is messing up the program's memory?
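
One thing I can still try is fencing every GL call with glGetError to find the first call after which the state goes bad (a debugging sketch, not code from the project):

#include <cassert>

// Wrap suspicious GL calls; aborts on the first call that records an error.
#define GL_CHECK(call)                                \
    do {                                              \
        call;                                         \
        assert(glGetError() == GL_NO_ERROR && #call); \
    } while (0)

// usage: GL_CHECK(glDeleteTextures(1, &object.t));

This won't catch the access violation itself, but it might show whether an earlier call already leaves the context in a bad state.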


Solution

  • I don't know why glDelete would crash, but I am fairly certain you don't need it anyway and are overcomplicating this.

    glGenTextures creates a 'name' for your texture. glTexImage3D gives OpenGL some data to attach to that name. If my understanding is correct, there is no reason to delete the name when you don't want the data anymore.

    Instead, you should simply call glTexImage3D again on the same texture name and trust that the driver will know your old data is no longer needed. This lets you respecify a new size each time. The alternative, specifying a maximum size up front and then calling glTexSubImage3D, would make actually using the data difficult, since the texture would always keep its maximum size.
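
    In C++ this boils down to something like the following (a sketch; tex is the texture name from glGenTextures, kept alive for the whole run):

        glBindTexture(GL_TEXTURE_3D, tex);
        glTexImage3D(GL_TEXTURE_3D, 0, internalFormat,
                     newSize.x, newSize.y, newSize.z, border,
                     GL_RGBA, GL_BYTE, 0); // respecify; the driver reclaims the old storage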

    Below is a silly test in Python (pyglet needed) that first allocates a whole bunch of textures (just to check that the GPU memory usage measurement in GPU-Z actually works) and then re-allocates new data to the same texture every frame, with a random new size and some random data to defeat any optimizations that might kick in if the data stayed constant.

    It's (obviously) slow as hell, but it definitely shows, at least on my system (Windows Server 2003 x64, NVIDIA Quadro FX 1800, 259.81 drivers), that GPU memory usage does NOT go up while looping over the re-allocation of the texture.

    import pyglet
    from pyglet.gl import *
    import random
    
    def toGLArray(seq):
        # pack a Python list into a ctypes array of GLfloat
        return (GLfloat*len(seq))(*seq)
    
    w, h = 800, 600
    AR = float(h)/float(w)
    window = pyglet.window.Window(width=w, height=h, vsync=False, fullscreen=False)
    
    
    def init():
        glActiveTexture(GL_TEXTURE1)
        tst_tex = GLuint()
        some_data = [11.0, 6.0, 3.2, 2.8, 2.2, 1.90, 1.80, 1.80, 1.70, 1.70,  1.60, 1.60, 1.50, 1.50, 1.40, 1.40, 1.30, 1.20, 1.10, 1.00]
        some_data = some_data * 1000*500
    
        # allocate a few useless textures just to see GPU memory load go up in GPU-Z
        for i in range(10):
            dummy_tex = GLuint()
            glGenTextures(1, dummy_tex)
            glBindTexture(GL_TEXTURE_2D, dummy_tex)
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, 1000, 0, GL_RGBA, GL_FLOAT, toGLArray(some_data))
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    
        # our real test texture
        glGenTextures(1, tst_tex)
        glBindTexture(GL_TEXTURE_2D, tst_tex)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, 1000, 0, GL_RGBA, GL_FLOAT, toGLArray(some_data))
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    
    def world_update(dt):
        pass
    pyglet.clock.schedule_interval(world_update, 0.015)
    
    @window.event
    def on_draw():
        glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT)
        # randomize texture size and data
        size = random.randint(1, 1000)
        data = [random.randint(0, 100) for i in range(size)]
        data = data*1000*4
    
        # just to see our draw calls 'tick'
        print(pyglet.clock.get_fps())
    
        # reallocate texture every frame
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, size, 0, GL_RGBA, GL_FLOAT, toGLArray(data))
    
    def main():
        init()
        pyglet.app.run()
    
    if __name__ == '__main__':
        main()