# Clarifications regarding Time Complexity

I need some help with the time complexity of an algorithm. What exactly are the best case and worst case time complexity of an algorithm? My understanding is that the best case time complexity is an estimate of the minimum number of basic operations the system would have to execute for the algorithm to terminate. On the other hand, the worst case time complexity is the maximum number of basic operations the system would have to perform while executing the algorithm. Is my understanding correct?

As an example, consider a basic C++ code for Bubble Sort:

```cpp
int count = 0;
while (count <= n - 1)
{
    bool check = false;
    int index = 0;
    while (index < n - 1) // stop before the last element so array[index + 1] stays in bounds
    {
        if (array[index] > array[index + 1])
        {
            swap(array[index], array[index + 1]);
            check = true;
        }
        index++;
    }
    if (check != true)
    { break; }
    else
    { count++; }
}
```

In the above code, the worst case time complexity for an array of size n should be O(**n^2**), which occurs when the array is sorted in descending order. This makes sense because the system would make (**n-1**) swaps in the first iteration of the outer loop, (**n-2**) swaps in the second iteration of the outer loop, and so on. Therefore, the total number of operations at the end of the algorithm would be (**n-1**) + (**n-2**) + .... + **1** = **n**(**n-1**)/**2**. Neglecting the constants and the lower degree terms, the worst case time complexity comes out to be O(**n^2**).
Similarly, the best case time complexity of the algorithm would be Ω(n), because the system would have to pass through the elements once, and the algorithm terminates when no swaps are made, given the array is already sorted.
My question is: shouldn't the best case time complexity be Ω(1) when the input size of the array is zero (i.e. when the array is empty)? Is the input size being zero more of an exception/edge case, so that the best case time complexity for any array of non-zero input size will be Ω(n)?
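To make the two cases concrete, here is an instrumented sketch of the bubble sort above (the counting wrapper `bubbleSortComparisons` is my own addition, not part of the original code) that counts element comparisons. A sorted array of size n costs n-1 comparisons (a single pass), while a reverse-sorted array costs n(n-1) comparisons (n passes of n-1 comparisons each), matching the Ω(n) and O(n^2) analysis:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Instrumented version of the bubble sort from the question:
// returns the number of element comparisons performed.
long long bubbleSortComparisons(std::vector<int> array) {
    long long comparisons = 0;
    int n = static_cast<int>(array.size());
    int count = 0;
    while (count <= n - 1) {
        bool check = false;
        int index = 0;
        while (index < n - 1) {  // stop before the last element: array[index + 1] stays in bounds
            ++comparisons;
            if (array[index] > array[index + 1]) {
                std::swap(array[index], array[index + 1]);
                check = true;
            }
            ++index;
        }
        if (!check) break;  // no swaps in this pass: the array is sorted
        ++count;
    }
    return comparisons;
}
```

For an empty array the loop body never runs, so the cost is 0 comparisons; but as the answer below explains, that does not change the Ω(n) best case, which is stated as a function of n.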

Also, what exactly is the notation for best and worst case time complexity? I believe we have the big-O notation for the worst case and big-Omega for the best case. I have, however, come across sources that use big-O for best case time complexity as well.

I would much appreciate it if someone could explain the concept to me in detail with examples. Do not mind the trivial mistakes (if any), as this is my first time on Stack Overflow. Thanks in advance.

## Solution

> My question is: shouldn't the best case time complexity be Ω(1) when the input size of the array is zero (i.e. when the array is empty)?

No, because you are determining the time complexity as a function of the input size, "as `n` grows large" as we say. We want the best case for input of size `n`, not the best case where we get to choose `n`. We do get to choose the input, but not the size of the input.

> Also, what exactly is the notation for best and worst case time complexity? I believe we have the big-O notation for the worst case and big-Omega for the best case. I have, however, come across sources that use big-O for best case time complexity as well.

This is a common misunderstanding. It is not true that Big-O *means* worst case, Big-Omega *means* best case, and Big-Theta *means* average case.

Big-O is an upper bound. We are often interested in an upper bound on the worst case so Big-O gets associated with worst case behavior, but we can also be interested in an upper bound on average case behavior, etc.
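As a small illustration that Big-O can bound any case, consider a plain linear search (a sketch of my own, separate from the bubble sort above, with a hypothetical helper name). Its best case, target at the front, takes 1 comparison, so the best case is O(1); its worst case, target absent, takes n comparisons, so the worst case is O(n). Both statements are upper bounds, just on different cases:

```cpp
#include <cassert>
#include <vector>

// Linear search that reports how many element comparisons it performed.
// (Illustrative helper; the name and the counting are my own additions.)
int linearSearchComparisons(const std::vector<int>& v, int target) {
    int comparisons = 0;
    for (int x : v) {
        ++comparisons;           // one comparison per element examined
        if (x == target) break;  // found: stop early
    }
    return comparisons;
}
```

Searching `{7, 1, 2, 3}` for `7` costs 1 comparison (the best case), while searching `{1, 2, 3, 4}` for `9` costs 4 (the worst case).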

We are interested in an upper bound on the worst case because, when asymptotic notation is applied to running times, "higher" functions are worse, so upper bounds are in a sense good news. If the algorithm's running time has an upper bound of, say, O(n^3), that is a guarantee it can be no slower; a lower bound of Ω(n^3) only says it can be no faster than that bound, so it could still be worse, could be slower.
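For reference, the standard textbook definitions (not specific to this answer) make the bound-versus-case distinction explicit:

```latex
f(n) \in O(g(n))      \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \le c \cdot g(n)  % upper bound
f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \ge c \cdot g(n)  % lower bound
f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \text{ and } f(n) \in \Omega(g(n))                % tight bound
```

Here f(n) can be the worst-case, best-case, or average-case running time of the algorithm; the notation bounds whichever of those functions you pick.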
