Tags: algorithm, sorting, language-agnostic, pseudocode

Calculating time complexity of inner loops


So I am learning how to calculate time complexities of algorithms from Introduction to Algorithms by Cormen. The example given in the book is insertion sort:

1. for i from 2 to length[A]
2.    value := A[i] 
3.    j := i-1
4.    while j > 0 and A[j] > value
5.       A[j+1] := A[j]
6.       j := j-1
7.    A[j+1] := value

Line 1 executes n times.
Line 4, according to the book, executes $\sum_{i=2}^{n} t_i$ times, where $t_i$ is the number of times the while-loop test on line 4 is executed for that value of i.
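To make this concrete for myself, here is a quick instrumented sketch (plain C with 0-based indexing, so the loop bounds shift by one; the function name insertion_sort and the test_count counter are just names I made up, they are not from the book). It bumps a counter on every evaluation of the line-4 test, so the final value should match the summation above:

#include <stdio.h>

/* Instrumented insertion sort (0-based arrays, so the outer loop runs for
 * i = 1 .. n-1 instead of 2 .. n).  test_count is incremented every time
 * the while-loop condition ("line 4") is evaluated, so at the end it
 * equals the sum of the t_i's. */
static long insertion_sort(int a[], int n)
{
    long test_count = 0;
    for (int i = 1; i < n; i++) {
        int value = a[i];
        int j = i - 1;
        test_count++;                       /* first evaluation of the test */
        while (j >= 0 && a[j] > value) {
            a[j + 1] = a[j];
            j--;
            test_count++;                   /* one more evaluation per pass */
        }
        a[j + 1] = value;
    }
    return test_count;
}

int main(void)
{
    int a[] = {5, 2, 4, 6, 1, 3};
    printf("while test executed %ld times\n",
           insertion_sort(a, (int)(sizeof a / sizeof a[0])));
    return 0;
}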

So, as a general rule, can the execution time of all inner loops be represented by a summation?


Solution

  • Anecdotally, most loops can be represented by a summation. In general this isn't true; for example, I could create the loop

    for (int i = n; i > 1; i = (i % 2 == 0) ? i / 2 : 3 * i + 1)
    

    i.e. initialize i to n; on each iteration, if i is even divide it by two, otherwise multiply it by three and add one; halt when i <= 1. What's the big-O for this? Nobody knows: this is the Collatz (3n+1) problem, and whether the loop even terminates for every n is still an open question (a compilable version of this loop is sketched below).

    Obviously I've never seen a real-life program using such a weird loop, but I have seen while loops that either increase or decrease the counter depending on some arbitrary condition that might change from iteration to iteration (see the gnome-sort sketch below for a concrete example).
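For completeness, here is a compilable version of the loop from the answer, counting its iterations. The function name weird_loop_steps and the small test driver are illustrative additions, not part of the original code:

#include <stdio.h>

/* Count the iterations of the loop above: halve i when it's even,
 * otherwise replace it with 3*i + 1 (the Collatz / 3n+1 rule).
 * No one has proved a bound on this count as a function of n. */
static int weird_loop_steps(long n)
{
    int steps = 0;
    for (long i = n; i > 1; i = (i % 2 == 0) ? i / 2 : 3 * i + 1)
        steps++;
    return steps;
}

int main(void)
{
    for (long n = 1; n <= 10; n++)
        printf("n = %2ld -> %d iterations\n", n, weird_loop_steps(n));
    return 0;
}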
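And here is a minimal sketch of the last point: a while loop whose counter moves forward or backward depending on the data. It is essentially gnome sort, chosen purely as an illustration; you can't read a summation straight off the loop header, because the iteration count depends on the input's order.

#include <stdio.h>

/* The counter i moves forward when the current pair is in order and
 * backward after a swap, so the number of iterations is decided by the
 * data, not by the loop header alone. */
static long gnome_sort(int a[], int n)
{
    long iterations = 0;
    int i = 1;
    while (i < n) {
        iterations++;
        if (i == 0 || a[i] >= a[i - 1]) {
            i++;                            /* in order: step forward */
        } else {
            int tmp = a[i];                 /* out of order: swap, step back */
            a[i] = a[i - 1];
            a[i - 1] = tmp;
            i--;
        }
    }
    return iterations;
}

int main(void)
{
    int sorted[]   = {1, 2, 3, 4, 5, 6};
    int reversed[] = {6, 5, 4, 3, 2, 1};
    printf("sorted input:   %ld iterations\n", gnome_sort(sorted, 6));
    printf("reversed input: %ld iterations\n", gnome_sort(reversed, 6));
    return 0;
}

On the sorted array the loop runs n - 1 times; on the reversed one it runs close to n^2 times, even though the loop is exactly the same text.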