Tags: c++, boost, boost-asio

boost::asio delegate type erasure


I am trying to understand boost::asio for the first time. My understanding is that you can post a handler onto the executor, where a handler is anything callable with a signature like void().

How does boost::asio erase the type of the callable inside the handler, so that I can put a std::function<void()>, a lambda, etc. into the queue? Does it convert all types internally to a boost::function<void()> and thus incur a heap allocation and potentially runtime vtable dispatch?

If you know where to look in the source, that would be very useful.


Solution

  • What you're asking is mostly implementation detail.

    However, the concerns behind your question are valid. So valid, in fact, that they're central to Asio's design choices. See e.g. https://www.boost.org/doc/libs/1_80_0/doc/html/boost_asio/overview/model/allocators.html

    In particular, Asio works very hard to minimize allocations and additionally allows the caller to associate custom allocators with their handler. See https://www.boost.org/doc/libs/1_80_0/doc/html/boost_asio/overview/core/allocation.html
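
    For illustration only (this is my sketch, not code from the linked docs), associating an allocator with a handler looks roughly like this, assuming Boost 1.79+ where boost::asio::bind_allocator and boost::asio::recycling_allocator are available:

    #include <boost/asio.hpp>

    int main() {
        boost::asio::io_context io;

        // Associate Asio's recycling allocator with the handler; any temporary
        // storage the operation needs comes from it rather than plain operator new.
        post(io, boost::asio::bind_allocator(
                     boost::asio::recycling_allocator<void>(),
                     [] { /* completion handler */ }));

        io.run();
    }

    The linked custom allocation example achieves the same effect with a small handler-local memory pool.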

    Most specifically, a sometimes-overlooked specification requirement on handler execution is (quoting the docs):

    If an asynchronous operation requires a temporary resource (such as memory, a file descriptor, or a thread), this resource is released before calling the completion handler.

    With the rationale:

    By ensuring that resources are released before the completion handler runs, we avoid doubling the peak resource usage of the chain of operations

    Now, on the topic of implementation details, you can check the implementation yourself. Note that the details vary with the choice of Completion Token and other things (e.g. whether C++ exceptions are disabled), but e.g. for a simple

    boost::asio::io_context io;
    post(io, []{});
    

    You could start at initiate_post_with_executor, which will end up wrapping your operation in asio::detail::executor_op, which carefully maintains the deallocation-order guarantees mentioned before (read the comments in the source).
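
    To make the mechanism concrete, here is a simplified sketch in the spirit of executor_op (my illustration, not Asio's actual code; names are made up): the handler type is erased behind a hand-rolled, single-entry "vtable" of one function pointer rather than behind a std::function.

    #include <utility>

    struct operation {
        using func_type = void (*)(operation*);
        explicit operation(func_type f) : complete_(f) {}
        void complete() { complete_(this); }  // one indirect call, no virtual dispatch
        func_type complete_;
    };

    template <class Handler>
    struct handler_op : operation {
        explicit handler_op(Handler h)
            : operation(&handler_op::do_complete), handler_(std::move(h)) {}

        static void do_complete(operation* base) {
            auto* self = static_cast<handler_op*>(base);
            // Move the handler out and release the node's storage *before*
            // invoking it, mirroring the "resources released first" guarantee quoted above.
            Handler h(std::move(self->handler_));
            delete self;  // real Asio releases via the handler's associated allocator
            h();
        }

        Handler handler_;
    };

    A queue of operation* can then hold arbitrary handler types, and where each node's storage comes from is governed by the handler's associated allocator rather than by a std::function-style internal allocation.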

    OFF-TOPIC

    Not addressing your question, but perhaps your real concerns: if you're worried about performance overhead of using Asio, I'd use a profiler.

    You may find your bottlenecks are in other places. In particular, you will want to be aware of

    • the type erasure and reference counting in the default executor for IO objects (asio::any_io_executor); see the sketch at the end of this answer. I'm personally not fond of this default, although it makes sense from a user-friendliness standpoint.
    • synchronization on the execution context's operation queues

    Note that these are not usually big concerns, and many micro-optimizations apply: sometimes the need to queue a completion can be obviated entirely, or it can be queued to a thread-local operation queue, which reduces lock contention, etc. Still, heeding these can improve your performance: see Boost Asio experimental channel poor performance and https://chat.stackoverflow.com/transcript/230461?m=51873813#51873813.
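
    If the first bullet does show up in a profile, IO objects can be pinned to a concrete executor type instead of the type-erased default. A minimal sketch (the concurrency hint and the explicit executor template argument are standard Asio features, but verify the exact spelling against your Boost version):

    #include <boost/asio.hpp>
    #include <chrono>

    int main() {
        // Concurrency hint of 1: only one thread will run this context,
        // which lets Asio skip some internal locking.
        boost::asio::io_context io(1);

        // A timer whose executor is the concrete io_context::executor_type,
        // avoiding the type-erased, reference-counted any_io_executor default.
        boost::asio::basic_waitable_timer<
            std::chrono::steady_clock,
            boost::asio::wait_traits<std::chrono::steady_clock>,
            boost::asio::io_context::executor_type>
            timer(io, std::chrono::seconds(1));

        timer.async_wait([](boost::system::error_code) { /* handle expiry */ });
        io.run();
    }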