Within a function, I create a vector with a generous amount of space, to which I push a runtime-determined number of Edge objects. Other objects, however, hold pointers to the Edges inside the vector. Occasionally the entire program segfaults because a pointer becomes invalid, and I suspect this happens when the vector reaches capacity and reallocates, thereby invalidating the memory addresses.
Is there any way around this? Or is there another solution for grouping heap allocations together?
Note that the primary motivation for this is to minimize heap allocations, since they are what is slowing down my algorithm. Initially I had a vector<Edge *> and every element added was individually allocated. Batch allocation increased the speed dramatically, but the vector method described here invalidates pointers.
Code example, as requested. This is the vector I declare as a stack variable:
vector<Edge> edgeListTemp(1000);
I then add to it like this, using the rvalue overload of push_back:
edgeListTemp.push_back(Edge{edge->movie, first, second});
Node objects keep pointers to these:
first->edges.push_back(&edgeListTemp.back());
second->edges.push_back(&edgeListTemp.back());
where edges is declared as follows:
std::vector<Edge *> edges; /**< Adjacency list */
There are several possible solutions (the first two are sketched in code after this list):

- Call reserve on the vector from the start; elements won't be reallocated until the size exceeds the reserved capacity.
- Use a std::deque instead. std::deque guarantees that pointers to elements aren't invalidated as long as you only push/pop at the front/back.
- std::list guarantees never to invalidate references to elements, but it introduces serious performance penalties (no O(1) indexing, one allocation for each node).
- Use std::shared_ptr<Edge> and always go through it to refer to the elements; this obviously has the disadvantage of needing one allocation for each element, which may or may not be acceptable, depending on your use case.
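To make the first two options concrete, here is a minimal sketch. The Edge and Node definitions are hypothetical stand-ins inferred from the snippets in the question (a movie field plus two Node pointers, and an adjacency list of Edge *), and the 1000-element bound is just the figure from the question:

#include <deque>
#include <string>
#include <vector>

// Hypothetical stand-ins for the question's types; the real Edge/Node
// presumably carry more state than this.
struct Node;
struct Edge {
    std::string movie;
    Node* first;
    Node* second;
};
struct Node {
    std::vector<Edge *> edges; /**< Adjacency list, as in the question */
};

int main() {
    Node a, b;

    // Option 1: one contiguous batch with the capacity fixed up front.
    // Note that vector<Edge> v(1000) creates 1000 default-constructed
    // elements; reserve(1000) only sets the capacity, so later push_backs
    // don't reallocate (or move elements) until the 1001st element.
    std::vector<Edge> edgeList;
    edgeList.reserve(1000);                    // known upper bound assumed
    edgeList.push_back(Edge{"movie", &a, &b});
    a.edges.push_back(&edgeList.back());       // valid while size() <= 1000
    b.edges.push_back(&edgeList.back());

    // Option 2: std::deque allocates in chunks and never moves existing
    // elements on push_back, so no capacity estimate is needed at all.
    std::deque<Edge> edgeDeque;
    edgeDeque.push_back(Edge{"movie", &a, &b});
    a.edges.push_back(&edgeDeque.back());      // not invalidated by push_back
    b.edges.push_back(&edgeDeque.back());
}

If you don't have a reliable upper bound on the number of edges, the deque variant is the safer default, since reserve only protects you for as long as the estimate holds.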