Tags: c++, constructor, raii

C++: Resolve Constructor Initializer List Dependencies with RAII


One particularly tricky thing to set up safely is GLX. The problem is that quite a few resources must be allocated, and if an error occurs at any point during initialization, the resources allocated so far must be deallocated in the correct order.

Here is what I wrote in C (only the relevant parts).

int gfx_init(struct gfx *g)
{
    int vis_attriblist[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XSetWindowAttributes wa;
    XVisualInfo *vis_info;
    int r = 0;

    g->xdpy = XOpenDisplay(NULL);
    if (g->xdpy == NULL) {
            r = -1;
            LOG_ERROR("Could not open X Display");
            goto xopendisplay_failed;
    }

    vis_info = glXChooseVisual(g->xdpy, DefaultScreen(g->xdpy),
                               vis_attriblist);
    if (vis_info == NULL) {
            r = -1;
            LOG_ERROR("Couldn't get an RGBA, double-buffered visual"
                      " (GLX available?)\n");
            goto glxchoosevisual_failed;
    }

    g->xcolormap = XCreateColormap(g->xdpy, DefaultRootWindow(g->xdpy),
                                   vis_info->visual, AllocNone);
    if (gfx_has_xerror(g) /* Checks if there are errors
                             by flushing Xlib's protocol buffer
                             with a custom error handler set.
                             Not included here */) {
            r = -1;
            LOG_ERROR("Failed to create colormap");
            goto xcreatecolormap_failed;
    }

    wa.colormap = g->xcolormap;
    wa.event_mask = StructureNotifyMask | VisibilityChangeMask;

    g->xwindow = XCreateWindow(g->xdpy, DefaultRootWindow(g->xdpy),
                               0,0,1280,1024, 0, vis_info->depth,
                               InputOutput, vis_info->visual, CWColormap |
                               CWEventMask, &wa);
    if (gfx_has_xerror(g)) {
            r = -1;
            LOG_ERROR("Failed to create X11 Window");
            goto xcreatewindow_failed;
    }

    g->glxctx = glXCreateContext(g->xdpy, vis_info, NULL, True);
    if (g->glxctx == NULL) {
            r = -1;
            LOG_ERROR("Failed to create GLX context");
            goto glxcreatecontext_failed;
    }

    if (glXMakeCurrent(g->xdpy, g->xwindow, g->glxctx) == False) {
            r = -1;
            LOG_ERROR("Failed to make context current");
            goto glxmakecurrent_failed;
    }

    g->xa_wmdeletewindow = XInternAtom(g->xdpy, "WM_DELETE_WINDOW", False);
    if (gfx_has_xerror(g)) {
            r = -1;
            LOG_ERROR("XInternAtom failed");
            goto xinternatom_failed;
    }

    XSetWMProtocols(g->xdpy, g->xwindow, &g->xa_wmdeletewindow, 1);
    if (gfx_has_xerror(g)) {
            r = -1;
            LOG_ERROR("XSetWMProtocols failed");
            goto xsetwmprotocols_failed;
    }

    glClearColor(1,1,1,1);
    glColor4f(0,0,0,1);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    if (glGetError() != GL_NO_ERROR) {
            r = -1;
            LOG_ERROR("There have been GL errors");
            goto gotglerror;
    }

    XMapWindow(g->xdpy, g->xwindow);
    XFlush(g->xdpy);

    if (r < 0) {
gotglerror:
xsetwmprotocols_failed:
xinternatom_failed:
            glXMakeCurrent(g->xdpy, None, NULL);
glxmakecurrent_failed:
            glXDestroyContext(g->xdpy, g->glxctx);
glxcreatecontext_failed:
            XDestroyWindow(g->xdpy, g->xwindow);
xcreatewindow_failed:
            XFreeColormap(g->xdpy, g->xcolormap);
    }

xcreatecolormap_failed:
    /* This is a local resource which must be destroyed
       in case of success as well */
    XFree(vis_info);

    if (r < 0) {
glxchoosevisual_failed:
            XCloseDisplay(g->xdpy);
    }

xopendisplay_failed:
    return r;
}

Actually I'm quite happy with it; I think it's good C style. The only problem is that a gfx_destroy function must more or less duplicate the deallocation part of gfx_init, but that's very straightforward.

What I would like to know is how to do this initialization in good RAII C++ style. In particular, there are dependencies between the member allocations, and an imaginary RAII class Gfx's constructor should initialize them in the correct order, or throw an exception and guarantee that the partially constructed object is torn down again.

So the natural progression is to write short wrappers for the allocated types. E.g.

struct MyDisplay {
    Display *dpy;
    MyDisplay() : dpy(XOpenDisplay(NULL)) { if (!dpy) throw "XOpenDisplay()"; }
    ~MyDisplay() { XCloseDisplay(dpy); }
};
struct MyXVisualInfo {
    XVisualInfo *info;
    MyXVisualInfo(Display *dpy) {
        /* The attribute list lives in the constructor body: a non-const
           static member array cannot be initialized in-class, and
           glXChooseVisual takes a non-const int*. */
        int vis_attriblist[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
        info = glXChooseVisual(dpy, DefaultScreen(dpy), vis_attriblist);
        if (!info)
            throw "glXChooseVisual()";
    }
    ~MyXVisualInfo() {
        XFree(info);
    }
};

And the Gfx class:

class Gfx {
    MyDisplay mydpy_;
    MyXVisualInfo myinfo_;
    /* ... */
public:
    Gfx() : mydpy_(), myinfo_(mydpy_.dpy) /* , ... */ {}
};

But at this point we seem to have a problem: myinfo_(mydpy_.dpy) appears to hand an undefined value to the MyXVisualInfo constructor. Is this a showstopper for RAII class members? (This is not true, see @MarkB's answer.)

Also, if the constructor needs to allocate temporary resources, as is actually the case for myinfo_ (which I would prefer not to store inside the class), there seems to be no way to avoid entering the constructor body and assigning to the members from there, using resources local to the constructor body. This means members would undergo an additional construction and destruction, which is a no-go as there are side effects involved.

The only thing I can think of is using unique_ptr:

class Gfx {
    unique_ptr<MyDisplay> mydpy_;
    unique_ptr<MyColormap> mycolormap_;
    /* ... */
public:
    Gfx() {
        mydpy_.reset(new MyDisplay());

        MyXVisualInfo myinfo(mydpy_->dpy);

        mycolormap_.reset(new MyColormap(mydpy_->dpy,
                                         DefaultRootWindow(mydpy_->dpy),
                                         myinfo.info, AllocNone));
    }
};

Now this is obviously over-engineered and has become a kludge to maintain. Introducing unique_ptr, with its unneeded heap allocation and indirection, is not nice either.

Isn't there an approach that does what the C version does, in clean RAII style?


Solution

  • std::shared_ptr and std::unique_ptr can be used with a custom deleter. Example:

    std::shared_ptr<int> mem (static_cast<int*>(malloc(sizeof(int))), free);
    

    or with a lambda:

    std::shared_ptr<int> mem (new int(), [](int* foobar) {
        std::cout << "I am a deleter" << std::endl;
        delete foobar;
    });