I'm an embedded software developer, and as such I can't always use all the nice C++ features. One of the hardest parts is avoiding dynamic memory allocation, since it is pervasive throughout the STL containers.
std::vector, however, is very useful when working with variable-size datasets. The problem is that its allocation (e.g. via std::reserve) is neither done at initialization nor fixed in size. This means memory fragmentation can occur whenever the vector grows and has to reallocate and copy its contents.
It would be great if every vector had a pre-allocated memory block equal to the maximum size it can ever grow to. That would give deterministic behaviour and make it possible to map the microcontroller's memory usage at compile time. A call to push_back while the vector is at its maximum size would then throw std::bad_alloc.
I have read that a custom allocator can be written as a replacement for std::allocator to get different allocation behaviour. Would it be possible to achieve the behaviour described above with a custom allocator, or would an alternative solution be a better fit?
I would really like to keep using the STL containers and extend them rather than write my own vector, since my implementation is far more likely to contain mistakes than theirs.
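To make the question concrete, something along the lines of the sketch below is what I have in mind. The allocator name and buffer handling are made up by me and unverified on my target toolchain; it assumes the C++11 allocator model (an older toolchain would need the full C++03 allocator interface with pointer/reference typedefs, construct, destroy, etc.) and a single reserve() to the maximum size right after construction.

```cpp
#include <cstddef>
#include <new>        // std::bad_alloc
#include <vector>

// Sketch only: an allocator that hands out a single block from a
// caller-provided static buffer and refuses anything larger than
// MaxElements by throwing std::bad_alloc. It only behaves as intended
// if the vector is reserve()'d to MaxElements right after construction,
// so it never tries to grow (on growth, vector requests a second,
// bigger block while the first is still live). Copying such a vector
// would need a second buffer, which this sketch does not handle.
template <typename T, std::size_t MaxElements>
struct FixedCapacityAllocator {
    typedef T value_type;

    template <typename U>
    struct rebind { typedef FixedCapacityAllocator<U, MaxElements> other; };

    explicit FixedCapacityAllocator(void* buffer) : buffer_(buffer), in_use_(false) {}

    template <typename U>
    FixedCapacityAllocator(const FixedCapacityAllocator<U, MaxElements>& other)
        : buffer_(other.buffer_), in_use_(other.in_use_) {}

    T* allocate(std::size_t n) {
        if (in_use_ || n > MaxElements)
            throw std::bad_alloc();   // capacity exceeded (or unexpected regrowth)
        in_use_ = true;
        return static_cast<T*>(buffer_);
    }

    void deallocate(T*, std::size_t) { in_use_ = false; }

    void* buffer_;
    bool  in_use_;
};

template <typename T1, typename T2, std::size_t N>
bool operator==(const FixedCapacityAllocator<T1, N>& a,
                const FixedCapacityAllocator<T2, N>& b) {
    return a.buffer_ == b.buffer_;
}

template <typename T1, typename T2, std::size_t N>
bool operator!=(const FixedCapacityAllocator<T1, N>& a,
                const FixedCapacityAllocator<T2, N>& b) {
    return !(a == b);
}

// The buffer is a static object, so it shows up in the map file and the
// worst-case RAM usage is known at link time.
// alignas(double) static unsigned char storage[32 * sizeof(double)];
//
// typedef FixedCapacityAllocator<double, 32> Alloc;
// std::vector<double, Alloc> samples{Alloc(storage)};
// samples.reserve(32);       // the one and only allocation, out of 'storage'
// samples.push_back(3.14);   // fine up to 32 elements
// // growing past 32 elements throws std::bad_alloc
```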
Sidenote #1: I can't use std::array, because 1) it isn't provided by my compiler, and 2) although it does give static allocation, I would still have to manage the boundary between my data and the unused buffer space inside the std::array myself. That amounts to rewriting std::vector with my own allocation properties, which is exactly what I'm trying to avoid.
You can implement or reuse Boost's static_vector (boost::container::static_vector): a variable-size array container with a fixed, compile-time capacity.
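For example (a minimal sketch; check the Boost.Container documentation for the exact overflow behaviour of your Boost version):

```cpp
#include <boost/container/static_vector.hpp>
#include <cstdio>

int main() {
    // Capacity is part of the type and the storage lives inside the
    // object itself, so no heap allocation ever happens and the RAM
    // footprint is known at compile time.
    boost::container::static_vector<int, 16> samples;

    for (int i = 0; i < 10; ++i)
        samples.push_back(i * i);

    std::printf("size=%u capacity=%u\n",
                static_cast<unsigned>(samples.size()),
                static_cast<unsigned>(samples.capacity()));

    // Pushing past 16 elements exceeds the fixed capacity; as far as I
    // know this throws std::bad_alloc when exceptions are enabled.
    return 0;
}
```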
There is also a port of LLVM's SmallVector without the LLVM dependencies here. It stores elements inside the object (typically on the stack) until a compile-time constant is exceeded, and only then moves to the heap.
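Usage looks roughly like this (sketch using the upstream llvm::SmallVector header; I'd expect the standalone port to offer a very similar interface):

```cpp
#include "llvm/ADT/SmallVector.h"

int main() {
    // The first 8 elements are stored inside the object itself (no heap
    // traffic); pushing a 9th element would spill over to a heap
    // allocation, which may or may not be acceptable on a microcontroller.
    llvm::SmallVector<int, 8> readings;

    for (int i = 0; i < 8; ++i)
        readings.push_back(i);

    // Still entirely in-object storage at this point.
    return 0;
}
```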