Tags: c, memory-management, pthreads, thread-local-storage

Is it better to use multiple pthread keys or a single pthread key?


I am trying to use pthread keys to automatically clean up all of my allocated thread-local memory. There are several different memory allocations in my code that need to be cleaned up. I am wondering if it is better to create one pthread key whose destructor function frees every single allocation, or multiple pthread keys, each with a unique destructor function that only cleans up a specific allocation.

I think it is more common to create a pthread key per unique memory allocation, so I currently have implemented that. But I think it would be cleaner to use only a single key that destructs all thread-local memory. That way I could just add my cleanup methods to that destructor if I ever need to add more thread-local memory.


Solution

  • I am trying to use pthread keys to automatically clean up all of my allocated thread-local memory.

    I take you to mean that you are registering a destructor function with your thread-specific data keys, on which you intend to rely for releasing dynamically allocated objects associated with those keys.

    I am wondering if it is better to create one pthread key whose destructor function frees every single allocation or create multiple pthread keys, each with a unique destructor function that only cleans up a specific allocation.

    Each thread has at most one object associated with each key. It is that object which the destructor for a given key should endeavor to clean up. Such an object may have multiple components that each need to be freed, but I would account that mostly a question of how to assign data to keys -- that is, "what is the significance of the data associated with key #1?", and so on. Having made that decision, the needed behavior of each destructor follows.

    I think it is more common to create a pthread key per unique memory allocation

    You might very well be surprised. The top-level TSD object for a given key may well be a dynamically allocated one (though it does not have to be), but that does not imply one key per allocation. The top-level TSD object may itself have independently-allocated components that need to be cleaned up appropriately. So again, how many allocations a destructor should handle is not a particularly useful question, because the answer is "however many is appropriate". You would be best off thinking in terms of what (kind of) object is associated with each key, and what the appropriate means is to clean it up.

    But I think it would be cleaner to use only a single key that destructs all thread-local memory.

    The key does not destruct anything. Its role is to identify an object. It is important to understand this, because your design will be cleanest if it is semantically aligned with the API design, including pthreads. Approaching the question of what data should be associated with a key from the perspective of how the cleanup code is structured is letting the tail wag the dog.

    In any case, you cannot guarantee a single, catch-all key anyway, because you do not, in general, have such detailed control. Any external library you use, or the C standard library, for that matter, may register its own TSD keys and associated destructors. You can limit your own code to a single TSD key, and that may be a reasonable design in any given case. But there's no reason at all to conclude that it's a cleaner or better approach for every case.

    That way I could just add my cleanup methods to that destructor if I ever need to add more thread-local memory.

    And if two otherwise unrelated modules both require TSD? Do you couple them together through a common TSD key and destructor just to maintain the single-key design? I don't consider that particularly clean.

    A clean design is the foundation for clean code. If your design is flawed or messy then you will need messy code to implement it. What strategy to use to map objects to TSD keys is pretty far down the line of decisions to make during the design and development process.