Tags: algorithm, graph, big-o, complexity-theory, bellman-ford

Dropping non-constants in Algorithm Complexity


So, basically, I'm implementing an algorithm that calculates the distances from one source node to every other node in a weighted graph, and that detects and marks any node that is in a negative cycle.

My question regards the total time complexity of my algorithm. Assume V is the number of nodes and E is the number of edges.

The algorithm starts by reading E lines of input that specify the edges of the graph and inserting each edge into the corresponding adjacency list. This operation is O(E).
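A minimal sketch of what this phase could look like, assuming a directed graph whose edges are given as `u v w` triples with 0-based node indices (the format and names are illustrative, not necessarily the original code):

```cpp
#include <iostream>
#include <utility>
#include <vector>

int main() {
    int V, E;
    std::cin >> V >> E;                   // number of nodes and edges

    // adj[u] holds a (v, w) pair for every edge u -> v with weight w
    std::vector<std::vector<std::pair<int, long long>>> adj(V);

    for (int i = 0; i < E; ++i) {         // one pass over the E edge lines: O(E)
        int u, v;
        long long w;
        std::cin >> u >> v >> w;
        adj[u].emplace_back(v, w);
    }
}
```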

I run the V-1 relaxation passes of the Bellman-Ford algorithm to compute the distances, and then I run another V-1 passes to detect the nodes affected by a negative cycle. Each phase is O(VE), so together this is 2 * O(VE) = O(VE).
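A sketch of the two phases, assuming the adjacency list built above, a single source `src`, and `INF` meaning "unreachable" (function and variable names are illustrative):

```cpp
#include <limits>
#include <utility>
#include <vector>

const long long INF = std::numeric_limits<long long>::max() / 4;

// Phase 1: V-1 rounds, each relaxing every edge once, compute the shortest
//          distances from src.
// Phase 2: V-1 more rounds; if an edge u -> v can still be relaxed, v is
//          affected by a negative cycle, and the flag is then propagated
//          along outgoing edges in the remaining rounds.
// Each round does O(E) relaxation checks, so both phases together perform
// O(VE) relaxation checks.
void bellmanFord(const std::vector<std::vector<std::pair<int, long long>>>& adj,
                 int src,
                 std::vector<long long>& dist,
                 std::vector<bool>& inNegCycle) {
    const int V = static_cast<int>(adj.size());
    dist.assign(V, INF);
    inNegCycle.assign(V, false);
    dist[src] = 0;

    for (int round = 0; round < V - 1; ++round)      // phase 1: distances
        for (int u = 0; u < V; ++u)
            if (dist[u] < INF)
                for (const auto& [v, w] : adj[u])
                    if (dist[u] + w < dist[v])
                        dist[v] = dist[u] + w;

    for (int round = 0; round < V - 1; ++round)      // phase 2: negative-cycle marks
        for (int u = 0; u < V; ++u)
            if (dist[u] < INF)
                for (const auto& [v, w] : adj[u])
                    if (inNegCycle[u] || dist[u] + w < dist[v])
                        inNegCycle[v] = true;
}
```

The `dist[u] < INF` guard keeps unreachable nodes from producing spurious relaxations; nodes on a negative cycle that the source cannot reach keep `dist = INF` and are not flagged.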

I print a distance vector of size V to display each node's distance and/or whether the node is in a negative cycle. This is O(V).

So I guess my total complexity would be O(VE + V + E). Now my question is: since VE is almost always larger than V + E (and whenever V and E are both at least 2, it always is), can I drop the V + E term and write the complexity simply as O(VE)?


Solution

  • Yes, O(VE + V + E) simplifies to O(VE) given that V and E represent the number of vertices and edges in a graph. For a highly connected graph, E = O(V^2) and so in that case VE + V + E = O(V^3) = O(VE). For a sparse graph, E = O(V) (note, this is not necessarily a tight upper bound) and so VE + V + E = O(V^2) = O(VE). In all cases O(VE) is an appropriate upper bound on the complexity.
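For completeness, here is one way to see the same bound without a case split on the density of the graph (a small worked inequality, assuming only V >= 1 and E >= 1; it is not part of the answer above):

```latex
% Since E >= 1 we have V <= VE, and since V >= 1 we have E <= VE, so
\[
  VE + V + E \;\le\; VE + VE + VE \;=\; 3\,VE,
\]
% and therefore VE + V + E = O(VE).
```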