Tags: time-complexity, computer-science, complexity-theory

Understanding time complexity: iterative algorithm


I'm new to working out time complexities, and I can't seem to understand the logic behind how this result is reached at the end:

100 (n(n+1) / 2)

For this function:

#include <stdio.h>

void a(int n) {                            /* n controls the loop bounds */
    int i, j, k;
    for (i = 1; i <= n; i++) {
        for (j = 1; j <= i; j++) {
            for (k = 1; k <= 100; k++) {   /* innermost loop always runs 100 times */
                printf("hello\n");
            }
        }
    }
}

Here's how I understand its algorithm:

i = 1, 2, 3, 4...n

j = 1, 2, 3, 4... (dependent on i, which can go up to n)

k = 1(100), 2(100), 3(100), 4(100)...n(100)
  = 100 [1, 2, 3, 4.....]

If I use this reasoning to simulate the end equation, I get this result:

End Equation:

100 (n(n+1) / 2)

Simulation

   i = 1, 2, 3, 4... n
   j = 1, 2, 3, 4... n
   k = 100, 300, 600, 1000...

I usually study these on YouTube and get the general idea of Big O, Omega, and Theta, but when it comes to this one I can't figure out how they end up with an equation like the one given above. Please help, and if you have any best practices, please share.

EDIT: As for my own guess at the answer, I think it should be this one:

100 ((n+n)/2) or 100 (2n / 2)

Source:

https://www.youtube.com/watch?v=FEnwM-iDb2g
At around: 15:21

Solution

  • I think you've got i and j correct, except that it's not clear why you say k = 100, 300, 600, ... In every iteration of the j loop, k runs from 1 to 100.

    So let's think through the inner loop first:

        k from 1 to 100:
            // Do something
    

    The inner loop is O(100) = O(1) because its runtime does not depend on n. Now we analyze the outer loops:

        i from 1 to n:
            j from 1 to i:
                // Do inner stuff
    

    Now let's count how many times Do inner stuff executes:

    i = 1     1 time
    i = 2     2 times
    i = 3     3 times
     ...        ...
    i = n     n times
    

    This is the classic triangular sum 1 + 2 + 3 + ... + n = n(n+1) / 2. Therefore, the time complexity of the outer two loops is O(n(n+1)/2), which reduces to O(n^2).

    The time complexity of the entire thing is O(1 * n^2) = O(n^2), because nesting loops multiplies their complexities (assuming the runtime of the inner loop is independent of the variables in the outer loops). Note that if we had not reduced at the various stages, we would be left with O(100 * n(n+1)/2), which is equivalent to O(n^2) because of the properties of big-O notation.
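
    If you want to sanity-check the formula empirically, here is a minimal C sketch (a brute-force count of my own, not part of the original question's code) that tallies the innermost executions and compares them against 100 (n(n+1) / 2):

        #include <stdio.h>

        /* Tally how many times the innermost statement runs for several n,
           then compare against the closed form 100 * n * (n + 1) / 2. */
        int main(void) {
            for (int n = 1; n <= 5; n++) {
                long count = 0;
                for (int i = 1; i <= n; i++)
                    for (int j = 1; j <= i; j++)
                        for (int k = 1; k <= 100; k++)
                            count++;               /* stands in for print("hello") */
                long formula = 100L * n * (n + 1) / 2;
                printf("n=%d: counted=%ld, formula=%ld\n", n, count, formula);
            }
            return 0;
        }

    Both columns agree: 100, 300, 600, 1000, 1500 for n = 1 through 5.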

    SOME TIPS: You asked for some best practices. Here are some "rules" that I made use of in analyzing the example you posted.

    • In time complexity analysis, we can ignore multiplication by a constant. This is why the inner loop is still O(1) even though it executes 100 times. Understanding this is the basis of time complexity analysis: we measure how the runtime grows at a large scale, not the exact number of clock cycles.
    • With nested loops whose runtimes are independent of each other, just multiply the complexities. Nesting the O(1) loop inside the outer O(n^2) loops resulted in O(n^2) code.

    • Some more reduction rules: http://courses.washington.edu/css162/rnash/quarters/current/labs/bigOLab/lab9.htm

    If you can break code up into smaller pieces (in the same way we analyzed the k loop separately from the outer loops), then you can take advantage of the nesting rule to find the combined complexity.
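
    For instance, here is a small hypothetical C sketch (the function and names are made up for illustration) where each piece is analyzed on its own and the pieces are then combined with the sequence and nesting rules:

        #include <stdio.h>

        /* A sequential O(n) piece followed by an O(n) * O(n) = O(n^2)
           nested piece; the whole function is O(n) + O(n^2) = O(n^2). */
        void pieces(int n) {
            for (int i = 0; i < n; i++)            /* piece 1: O(n) */
                printf("setup %d\n", i);

            for (int i = 0; i < n; i++)            /* piece 2: O(n) iterations ... */
                for (int j = 0; j < n; j++)        /* ... each with O(n) work => O(n^2) */
                    printf("work %d,%d\n", i, j);
        }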

    Note on Omega/Theta:

    Theta is the "exact bound" for time complexity, whereas Big-O and Omega are upper and lower bounds respectively. Because the running time here does not depend on the input data (as it would in, say, a sorting algorithm), we can get an exact bound: the upper bound equals the lower bound. Therefore, it makes no difference whether we use O, Omega, or Theta in this case.
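
    As a quick worked check of that exact bound (my own arithmetic): the code performs exactly 100 (n(n+1) / 2) prints, and for n >= 1

        50 n^2  <=  100 (n(n+1) / 2)  =  50 n^2 + 50 n  <=  100 n^2

    so the print count is squeezed between constant multiples of n^2, which is precisely the statement that the running time is Theta(n^2).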