time-complexity

# Clarifications regarding Time Complexity

I need some help with the time complexity of an algorithm. What exactly are the best-case and worst-case time complexities of an algorithm? My understanding is that the best-case time complexity is an estimate of the minimum number of basic operations the system would have to execute to terminate the algorithm, while the worst-case time complexity is the maximum number of basic operations the system would have to perform while executing it. Is my understanding correct?

As an example, consider basic C++ code for Bubble Sort:

```cpp
int count = 0;
while (count <= n - 1)
{
    bool check = false;
    int index = 0;
    while (index <= n - 2)  // stop at n-2 so array[index + 1] stays in bounds
    {
        if (array[index] > array[index + 1])
        {
            swap(array[index], array[index + 1]);
            check = true;
        }
        index++;
    }
    if (check != true)
    { break; }
    else
    { count++; }
}
```

In the above code, the worst-case time complexity for an array of size n should be O(n^2), which occurs when the array is sorted in descending order. This makes sense because the system would make (n-1) swaps in the first iteration of the outer loop, (n-2) swaps in the second iteration, and so on. Therefore, the total number of operations at the end of the algorithm would be (n-1) + (n-2) + ... + 1 = n*(n-1)/2. Neglecting the constants and the lower-degree terms, the worst-case time complexity comes out to be O(n^2). Similarly, the best-case time complexity of the algorithm would be Ω(n), because the system would have to pass through the elements once and would terminate when no swaps are made, given that the array is already sorted.

My question is: shouldn't the best-case time complexity be Ω(1) when the input size of the array is zero (i.e., when the array is empty)? Is the input size being zero more of an exception/edge case, so that the best-case time complexity for any array of non-zero input size will be Ω(n)?

Also, what exactly is the notation for best-case and worst-case time complexity? I believe we have big-O notation for the worst case and omega for the best case. I have, however, come across sources that use big-O for the best-case time complexity as well.

I would much appreciate it if someone could explain the concept to me in detail with examples. Please do not mind any trivial mistakes, as this is my first time on Stack Overflow. Thanks in advance.

# Solution

> My question is: shouldn't the best-case time complexity be Ω(1) when the input size of the array is zero (i.e., when the array is empty)?

No, because you are determining the time complexity as a function of the input size, "as `n` grows large" as we say. We want the best case for input of size `n`, not the best case where we get to choose `n`. We do get to choose the input, but not the size of the input.

> Also, what exactly is the notation for best-case and worst-case time complexity? I believe we have big-O notation for the worst case and omega for the best case. I have, however, come across sources that use big-O for the best-case time complexity as well.

This is a common misunderstanding. It is not true that Big-O means worst case, Big-Omega means best case, and Big-Theta means average case.

Big-O is an upper bound. We are often interested in an upper bound on the worst case so Big-O gets associated with worst case behavior, but we can also be interested in an upper bound on average case behavior, etc.

We are interested in an upper bound on the worst case because, when asymptotic notation is applied to running times, "higher" functions are worse, so upper bounds are good news in a sense. If the algorithm has an upper bound of, say, O(n^3) time, that is better than only having a lower bound of Ω(n^3), because a lower bound says the algorithm can be no faster than that: it could still be worse, i.e., slower, than the lower bound suggests.