I made this sample code to illustrate my question:
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/**
 *   begin       end
 *   v           v
 * XXXXXXXXXXXXXXXX
 * ^
 * data
 *   [===========]  size
 * [==============]  capacity
 */
typedef struct list_t
{
    int *data;
    int *begin;
    int *end;
    size_t size;
    size_t capacity;
} list_t;
/**
 *   begin       end
 *   v           v
 * XXXXXXXXXXXXXXXX
 * ^
 * old_data
 *
 * becomes
 *
 *   begin       end
 *   v           v
 * XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 * ^
 * data
 */
void reserve_back(list_t *this) {
    /* Keep begin/end as offsets: the old pointer values are invalid once realloc moves the block. */
    ptrdiff_t begin_off = this->begin - this->data;
    ptrdiff_t end_off = this->end - this->data;
    this->capacity *= 2;
    this->data = (int *) realloc(this->data, this->capacity * sizeof(int));
    this->begin = this->data + begin_off;
    this->end = this->data + end_off;
}
/**
 *   begin       end
 *   v           v
 * XXXXXXXXXXXXXXXX
 * ^
 * old_data
 *
 * becomes
 *
 *                   begin       end
 *                   v           v
 * XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 * ^
 * data
 */
void reserve_front(list_t *this) {
    /* Keep begin/end as offsets before the old block is freed. */
    ptrdiff_t begin_off = this->begin - this->data;
    ptrdiff_t end_off = this->end - this->data;
    int *old_data = this->data;
    this->data = (int *) malloc(this->capacity * sizeof(int) * 2);
    /* Copy the old contents into the upper half of the new block. */
    memmove(this->data + this->capacity, old_data, this->capacity * sizeof(int));
    free(old_data);
    /* Everything now lives one old capacity further in. */
    this->begin = this->data + this->capacity + begin_off;
    this->end = this->data + this->capacity + end_off;
    this->capacity *= 2;
}
list_t is basically a double-ended dynamic array that provides push and pop operations on both ends in constant time, as well as the usual random access.
When I need to increase the capacity at the back, I can use realloc to reallocate the data block, and realloc only moves the data when it has to. However, to increase the capacity at the front, I am forced to move the data every single time, which can become very expensive for large lists.
Is there any way to do this kind of reallocation in constant time when there is available memory before the already allocated data?
In a word, no. (Unless you write your own memory allocator, and if you try that you will soon see why it's not implemented in common library allocators.)
Generally speaking, the best way of implementing a double-ended queue (deque) is to keep the data in segments, rather than trying to keep a single contiguous vector. As long as the segments are of a reasonable size, this is almost as fast as a single contiguous buffer for indexed access or iteration, and faster for adding data to either end.
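To make the segmented idea concrete, here is a minimal sketch in C (the deque_t type, the field names, and the segment size are mine, purely illustrative): a small table of pointers to fixed-size blocks, plus the offset of the first element. Indexed access stays O(1), and growing at either end only allocates one new block and possibly grows the (tiny) pointer table; the elements already stored never move.

#define SEG_SIZE 256  /* elements per block; any convenient fixed size works */

typedef struct deque_t
{
    int **blocks;      /* table of pointers to blocks of SEG_SIZE ints */
    size_t nblocks;    /* number of entries in blocks[] */
    size_t front;      /* index of the first element, counted from blocks[0][0] */
    size_t size;       /* number of elements currently stored */
} deque_t;

/* O(1) random access: one division picks the block, the remainder picks the slot. */
static int *deque_at(deque_t *d, size_t i)
{
    size_t pos = d->front + i;
    return &d->blocks[pos / SEG_SIZE][pos % SEG_SIZE];
}

Pushing at the front when the first block is full means allocating one new block and prepending its pointer to the table; only the small table of pointers is ever shifted or reallocated, never the elements themselves.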
The one disadvantage of the segmented representation is that you cannot pass the contents of the deque to a function which expects a simple contiguous vector. On the other hand, if you don't have to do that very often, you might be reassured by the observation that making a flattened copy of the deque is no more expensive than copying a vector in order to expand it into a larger memory allocation.
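For the occasional call that really does need a contiguous array, a flattening helper (hypothetical, reusing the deque_t sketch above) is one allocation plus one pass of copies, the same O(n) work realloc performs whenever it cannot extend a vector in place:

/* Copy the deque's contents into one contiguous buffer that the caller frees. */
static int *deque_flatten(const deque_t *d)
{
    int *flat = (int *) malloc(d->size * sizeof(int));
    if (flat == NULL)
        return NULL;
    for (size_t i = 0; i < d->size; i++) {
        size_t pos = d->front + i;
        flat[i] = d->blocks[pos / SEG_SIZE][pos % SEG_SIZE];
    }
    return flat;
}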