Tags: c++, multithreading, boost, boost-asio

Boost ASIO running handler cleanup & lifetimes


The Boost ASIO documentation states that when an io_context object is destroyed, it destroys any outstanding handlers as well as any copies of the arguments that were made for those handlers.

What happens to handlers that are still being executed? An io_context may be run from multiple threads, and it is not clear to me what happens to those handlers and threads, or when and how they get cleaned up.

I imagine that if a handler makes use of the async_* initiating functions, it has a clear way to exit early: either it receives an error code, or ASIO itself terminates execution at the async_* call.

However, what happens if a handler is doing blocking work? Will the destruction of the handler be delayed until the blocking work completes?

This is relevant in the context of cleanup. If I have an object that owns its own io_context and schedules work on that context, is there some guarantee on the worst-case time it takes to clean up every handler?


Solution

  • What happens to handlers that are still being executed?

    You can't legally destroy the context in that state. io_context is documented as thread-safe, except for construction and destruction.

    An io_context may be run from multiple threads, and it is not clear to me what happens to those handlers and threads, or when and how they get cleaned up.

    You have to orchestrate the threads to exit. The usual way is by having the threads running io_context::run (or friends). As long as a thread is running a member function, the io_context cannot be destroyed.
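
    A minimal sketch of that ordering, assuming Boost 1.66+ (io_context, make_work_guard); the thread count and the posted handler are just placeholders:

        #include <boost/asio.hpp>
        #include <thread>
        #include <vector>

        int main() {
            boost::asio::io_context ioc;

            // Keep run() from returning while there is momentarily no work.
            auto guard = boost::asio::make_work_guard(ioc);

            std::vector<std::thread> workers;
            for (int i = 0; i < 4; ++i)
                workers.emplace_back([&ioc] { ioc.run(); });

            boost::asio::post(ioc, [] { /* some handler */ });

            guard.reset();            // let run() return once the work dries up
            for (auto& t : workers)
                t.join();             // no thread is inside a member function anymore

            // ioc is destroyed here, only after every thread has left run()
        }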

    For a forced shutdown you can of course use the stop() member function (which is safe to use), which will cause all run/poll member functions to return at the earliest opportunity, as well as make sure that subsequent invocations return immediately without running any handlers.
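
    For illustration, a sketch of such a forced shutdown (again assuming Boost 1.66+; the posted handler is a placeholder):

        #include <boost/asio.hpp>
        #include <thread>

        int main() {
            boost::asio::io_context ioc;
            auto guard = boost::asio::make_work_guard(ioc);
            std::thread worker([&ioc] { ioc.run(); });

            boost::asio::post(ioc, [] { /* may or may not run before stop() lands */ });

            ioc.stop();      // safe to call from any thread; run() returns at the earliest opportunity
            worker.join();   // run() has returned, so destroying ioc is now legal
        }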

    For a graceful shutdown, however, you typically want to cancel the (roots of) async operations in flight and wait for the services to run out of work naturally, at which point you can then safely destroy the service.
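
    As a sketch of the graceful variant, using a steady_timer as the "root" operation (any chain of async operations works the same way):

        #include <boost/asio.hpp>
        #include <chrono>
        #include <iostream>
        #include <thread>

        int main() {
            boost::asio::io_context ioc;
            boost::asio::steady_timer timer(ioc, std::chrono::hours(1));

            timer.async_wait([](boost::system::error_code ec) {
                if (ec == boost::asio::error::operation_aborted)
                    std::cout << "timer cancelled, handler exits early\n";
            });

            std::thread worker([&ioc] { ioc.run(); });

            // Cancel from inside the context, since the timer object itself is
            // not safe to share across threads.
            boost::asio::post(ioc, [&timer] { timer.cancel(); });

            worker.join();   // run() returns naturally once no work remains
        }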

    However, what happens if a handler is doing blocking work?

    Handlers should not typically run blocking work. When they do, you are simply (ab)using the context as a generic thread pool, and you would shut it down exactly like any thread pool you implemented yourself: you would, again, need to synchronize, cause all worker threads to stop executing new tasks and exit, and only once all workers have exited is it safe to destruct the thread pool.

    Exactly as with any third-party thread pool, if you need to be able to interrupt long-running tasks (i.e. tasks that block, in the context of IO services), you must facilitate some way to interrupt those tasks yourself.
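
    One common pattern (shown here as a sketch, not an ASIO facility) is a cooperative stop flag that the blocking task polls, after which shutdown proceeds like any hand-rolled pool:

        #include <boost/asio.hpp>
        #include <atomic>
        #include <chrono>
        #include <thread>

        int main() {
            boost::asio::io_context ioc;
            auto guard = boost::asio::make_work_guard(ioc);
            std::atomic<bool> stop{false};

            // "Blocking" work, sliced so that the stop flag can be observed.
            boost::asio::post(ioc, [&stop] {
                while (!stop.load())
                    std::this_thread::sleep_for(std::chrono::milliseconds(10));
            });

            std::thread worker([&ioc] { ioc.run(); });

            std::this_thread::sleep_for(std::chrono::milliseconds(50));
            stop = true;     // ask long-running tasks to wind down
            guard.reset();   // allow run() to return once the queue drains
            worker.join();   // only now is it safe to destroy ioc
        }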

    This is relevant in the context of cleanup. If I have an object that owns its own io_context and schedules work on that context, is there some guarantee on the worst-case time it takes to clean up every handler?

    Naturally, the runtime cost will be the sum of all the handler destructors, plus a little overhead for the context's internal management structures (queues, locks, service instances). It would be easy to create a handler type that makes this non-linear (just put a sleep in a destructor), but that is obviously not common practice.

    So, it's going to be O(h) + O(s), where h is the number of pending handlers and s is the number of service instances. In real life, s is going to be insignificant unless you really abuse the design, and h is going to be close to zero in the case of a graceful shutdown.

    When you forcefully stop() an execution context while it has a lot of work queued (or if you never started running the queued work), you might notice some destruction cost. Of course, if you don't care about graceful shutdown, you might be abandoning the process anyway, so you might consider not bothering with resource reclamation at all (std::exit, std::abort, std::terminate). That is fine for e.g. heap resources, but less so for resources like interprocess locks, temporary files, database transactions etc.