I have a class which resembles an adjacency matrix representation of a weighted, directed graph. Let's suppose the graph has n vertices. Here is how it works: the edges are stored in a member (named graph), in the form of n arrays, each having n integers. graph[i][j] represents the weight of the edge going from vertex i to vertex j.

Sample code:
#include <algorithm>
#include <vector>

class DiGraph {
public:
    DiGraph() {}

    DiGraph(size_t n) : n(n) {
        graph.resize(n);
        for (size_t i = 0; i < n; i++) {
            graph[i] = new int[n];
            // -1 marks a non-existing edge
            std::fill(graph[i], graph[i] + n, -1);
        }
    }

    void addEdge(int from, int to, int weight) {
        graph[from][to] = weight;
    }

    int& getEdge(int from, int to) {
        return graph[from][to];
    }

    void finalizeGraph() {
        for (size_t i = 0; i < n; i++) {
            for (size_t j = 0; j < n; j++) {
                if (graph[i][j] == -1) {
                    // Intended to release the slot of a non-existing edge.
                    // (Note: deleting a single element of a new[] array is
                    // not valid C++; shown here as in my current attempt.)
                    delete &graph[i][j];
                }
            }
        }
    }

private:
    size_t n = 0;
    std::vector<int*> graph;
};
Now I want to implement an algorithm that is only supposed to work on the existing edges of the graph. In other words, the algorithm is not supposed to create new edges other than the ones that already exist in the input graph. I want to use memory efficiently and at the same time achieve O(1) access to any edge between two vertices.

Is the space complexity of the above implementation of the graph effectively linear in the number of edges?
No, it is not, unless you can prove that the way you allocate and deallocate the edges is independent of n, which based on your description is unlikely.

Remember that Big O notation describes the worst case of your algorithm. Since you first allocate n^2 slots and only then remove the non-existing edges, the construction does O(n^2) work in the worst case, not O(n). In other words, as n grows, both the time and the memory you spend grow as O(n^2), because you have to allocate, and then deallocate, slots for edges that don't exist.
In order to get a cost that is linear in the number of edges, you need a different data structure in which the total work spent on edges is O(m), where m is the number of edges.

However, you can argue that if such a graph is given, i.e. the non-existing edges are never materialized, then the access time for each edge can still be O(1), assuming your data structure supports it.