I am trying to understand how to tell whether nested for-loops in an algorithm yield linear or quadratic Big-O complexity. Here are a few examples I came up with, related to brute-force look-up and graph traversal.
Am I on the right path of thinking?
Example 1:
N = ? // Number of elements in an array of integers
For i = 1 to N
    For j = 1 to N
        Print "foo"
Example 2:
N = ? // Number of elements in an array of integers
For i = 1 to log(N)
    For j = 1 to log(N)
        Print "foo"
Example 3:
V = ? // Total number of nodes in a graph
E = ? // Total number of edges in the graph (the total, not per node)
For i = 1 to V
    For j = 1 to E
        Print "foo"
Example 4:
V = ? // Total number of nodes in a graph
For i = 1 to V
    For-each edge of V[i]
        Print "foo"
Big-O notation describes how the running time of an algorithm changes with respect to the size of the input.
With example 1, if you hardcode N as 20, then there is no scaling with the input, and the algorithm actually becomes O(1). With N left free, the inner loop runs N times for each of the N iterations of the outer loop, for N * N iterations in total, so it is O(N^2).
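A quick way to check this intuition is to count iterations directly. Here is a minimal Python sketch of example 1 (the function name is mine, for illustration only):

```python
def count_example_1(n):
    # Count how many times "foo" would be printed by the nested loops.
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
    return count

for n in (10, 20, 40):
    print(n, count_example_1(n))
# Doubling N quadruples the count (100, 400, 1600) - the O(N^2) signature.
```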
Example 2 is similar to your intuition for example 1, except that each loop runs only log(N) iterations, so the total work is O(log^2 N).
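The same counting trick works for example 2. A sketch (assuming base-2 logarithms; the base only changes the constant factor, which big-O ignores):

```python
import math

def count_example_2(n):
    # Each loop runs floor(log2(n)) iterations, so the inner body
    # executes roughly (log2 n)^2 times.
    steps = int(math.log2(n))
    count = 0
    for i in range(steps):
        for j in range(steps):
            count += 1
    return count

# For n = 1024, log2(n) = 10, so the count is 10 * 10 = 100,
# far smaller than the n^2 = 1,048,576 of example 1.
print(count_example_2(1024))
```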
With example 3, I'd describe this as running in O(mn) time (m = number of edges, n = number of vertices), rather than O(n^2).
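Keeping the two sizes separate matters because m and n can grow independently. A sketch of example 3 (function name mine):

```python
def count_example_3(v, e):
    # The outer loop runs v times; for each of those, the inner loop
    # runs e times, giving v * e iterations in total.
    count = 0
    for i in range(v):
        for j in range(e):
            count += 1
    return count

# A sparse graph (few edges) and a dense one give very different counts
# for the same v, which O(n^2) would hide but O(mn) captures.
print(count_example_3(100, 99))    # sparse: 9,900 iterations
print(count_example_3(100, 4950))  # dense:  495,000 iterations
```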
The last example is actually slightly more nuanced than I first gave it credit (thanks, @Hurkyl!). The inner loop only visits the edges of the current node, and in an undirected graph each edge is seen from both of its endpoints, so effectively you run the inner loop a total of 2|E| times, in addition to running the outer loop |V| times. This leads to a complexity of O(|V| + 2|E|). Constants are generally ignored in big-O notation, so this is considered the same as O(|V| + |E|).
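You can see the |V| + 2|E| count concretely with an adjacency-list representation. A small sketch (the graph itself is a made-up example with |V| = 4 and |E| = 4):

```python
# Undirected graph as an adjacency list: each edge appears in the
# lists of both of its endpoints.
graph = {
    'a': ['b', 'c'],
    'b': ['a', 'c'],
    'c': ['a', 'b', 'd'],
    'd': ['c'],
}  # edges: a-b, a-c, b-c, c-d, so |E| = 4

outer = inner = 0
for node in graph:            # runs |V| times
    outer += 1
    for edge in graph[node]:  # runs deg(node) times for this node
        inner += 1

# The inner counts sum the degrees of all nodes, which is exactly 2|E|.
print(outer, inner)  # 4 8
```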