
pybind11: segfault on process exit with static py::object


I am using pybind11 to create a module in C++ and then importing it into a Python program. This is running through a normal script in CPython, not an embedded interpreter.

In my module, I have a function that defines a static py::object:

void some_function() {
    static const py::object my_object = ...;
}

This works fine at runtime, but I get a segfault when the process exits. If I change py::object to py::handle, the crash goes away, so it appears we are crashing when the py::object destructor tries to decrement the reference count.

My belief is that my module will be unloaded (and the static object's destructor will run) before the Python interpreter shuts down, since static destruction is LIFO, so it should be safe to run this destructor at that point. If that isn't the case, how do I make this safe (i.e. ensure my cleanup happens before Python's) other than intentionally leaking the object?


Solution

  • Two possible solutions:

    Instead of a function-local static object, you can define the object as a static attribute of your pybind11 module or class. Its lifetime is then tied to the bindings, which are managed by the Python interpreter and destroyed correctly during finalization.

    Another way is to destroy the object manually from an atexit callback registered on the Python side, which runs while the interpreter is still alive.

    You're right that C++ static objects are guaranteed to be destroyed in LIFO order, but that guarantee only applies within C++; it does not constrain the Python interpreter. CPython shuts itself down via Py_FinalizeEx before the C/C++ static destructors run, so by the time your py::object's destructor decrements the reference count, the interpreter is already gone.
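
    The atexit route can be sketched from the Python side. Here `module_cleanup` is a hypothetical stand-in for a function your C++ module would expose to release the cached object; the names are illustrative, not part of any real API:

    ```python
    import atexit

    def module_cleanup():
        # Stand-in for a C++ function exported by the extension module
        # that would drop the cached reference, e.g. by assigning
        # my_object = py::object(); on the C++ side.
        print("releasing cached object before interpreter shutdown")

    # Callbacks registered here run during Py_FinalizeEx, while the
    # interpreter is still fully alive, so decrementing a reference
    # count from inside one is safe.
    atexit.register(module_cleanup)
    ```

    The same registration can also be performed from C++ at module-initialization time by importing the atexit module through pybind11 and passing it a wrapped C++ callable.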