Tags: c++, qt, winapi, memory, bad-alloc

"Private memory" not released after catching bad_alloc despite object being destructed


An object tries to allocate more memory than the allowed virtual address space (2 GB on win32). The std::bad_alloc is caught and the object is released. Process memory usage drops and the process is supposed to continue; however, any subsequent memory allocation fails with another std::bad_alloc. Checking memory usage with VMMap showed that the heap memory appears to be released, but it is actually still marked as private, leaving no free space. The only remedy seems to be quitting and restarting. I would understand a fragmentation problem, but why can't the process get the memory back after the release?

The object is a QList of QLists. The application is multithreaded. I could make a small reproducer, but I managed to reproduce the problem only once; most of the time the reproducer can reuse the memory that was freed.

Is Qt doing something sneaky? Or is win32 delaying the release?


Solution

  • The answer by Martin Drab put me on the right path. Investigating heap allocations, I found this old message that clarifies what is going on:

    The issue here is that blocks over 512 KB are direct calls to VirtualAlloc, and everything smaller than this is allocated out of the heap segments. The bad news is that the segments are never released (entirely or partially), so once you take up the entire address space with small blocks you cannot use it for other heaps or blocks over 512 KB.

    The problem is not Qt-related but Windows-related; I could finally reproduce it with a plain std::vector of char arrays. The default heap allocator leaves the address-space segments untouched even after the corresponding allocation has been explicitly released. The rationale is that the process might ask for buffers of a similar size again, and the heap manager saves time by reusing existing address segments instead of compacting older ones to create new ones.

    Please note this has nothing to do with the amount of physical or virtual memory available. It is only the address space that remains segmented, even though those segments are free. This is a serious problem on 32-bit architectures, where the address space is only 2 GB (3 GB at most).

    This is why the memory was marked as "private" even after being released, and was apparently unusable by the same process for average-sized mallocs even though the committed memory was very low.

    To reproduce the problem, just create a huge vector of chunks smaller than 512 KB (they must be allocated with new or malloc). After the memory is filled and then released (no matter whether the limit is reached and an exception caught, or the memory is simply filled with no error), the process will be unable to allocate anything bigger than 512 KB. The memory is free and assigned to the same process ("private"), but all the buckets are too small.

    But there is worse news: there is apparently no way to force a compaction of the heap segments. I tried with this and this but had no luck; there is no exact equivalent of POSIX fork() (see here and here). The only solution is to do something more low-level, like creating a private heap and destroying it after the small allocations (as suggested in the message cited above) or implementing a custom allocator (there might be commercial solutions out there). Both are quite infeasible for large existing software, where the easiest solution is to close the process and restart it.