So I have an exam coming up, and a big part of it will be the quicksort algorithm. As everyone knows, the best-case (and also average-case) complexity of this algorithm is O(n log n), while the worst case is O(n^2).
The worst case I know how to explain: it happens when the selected pivot is the smallest or the biggest value in the array. Then we get n quicksort calls, each of which may take up to n time (I mean the partition operation). Am I right?
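To make this concrete, here is a little experiment I tried (my own sketch, using Lomuto partitioning with the last element as pivot and a counter added just for illustration):

```python
import random
import sys

sys.setrecursionlimit(10_000)  # sorted input recurses ~n deep

comparisons = 0

def partition(a, lo, hi):
    """Lomuto partition: last element as pivot; returns the pivot's final index."""
    global comparisons
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        comparisons += 1
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort(a, lo, hi):
    if lo < hi:
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)

n = 1_000
for label, data in [("sorted", list(range(n))),
                    ("shuffled", random.sample(range(n), n))]:
    comparisons = 0
    quicksort(data, 0, n - 1)
    print(label, comparisons)
```

On the sorted input the pivot is always the largest element of the subarray, so I get close to n^2/2 ≈ 500,000 comparisons, while the shuffled input stays on the order of n log n (roughly ten thousand).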
Now for the best/average case. I've read Cormen's book, and I understood many things thanks to it, but for quicksort he focuses on mathematical formulas to explain the O(n log n) complexity. I just want to know why it is O(n log n), without getting into a mathematical proof. So far I've only seen the Wikipedia explanation: if we choose a pivot that divides the array into n/2 and n/2 + 1 parts each time, we get a call tree of depth log n. But I don't know if that is true, and even if it is, why the depth is log n.
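If I understand the claim correctly, it just means that n can only be halved about log2(n) times before reaching 1. That part I can check with a tiny snippet of my own (not from Wikipedia):

```python
import math

n = 1024
size, levels = n, 0
while size > 1:     # each "ideal" quicksort level halves the subproblem
    size //= 2
    levels += 1

print(levels)        # 10
print(math.log2(n))  # 10.0
```

So the halving-gives-log2(n)-levels part I can verify, but I don't see why quicksort's recursion tree must follow this halving pattern.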
I know there are many materials covering quicksort on the internet, but they either only cover the implementation or just state the complexity without explaining it.
> Am I right?

Yes.
> we would have a call tree of depth log n, but I don't know if that is true

It is.
> why is it log n?
Because we partition the array roughly in half at every step, the call tree has depth about log n: each level halves the subarray sizes, and you can only halve n about log2(n) times before the pieces reach size 1. Draw the recursion tree and count its levels. And since the partition work across all subarrays on any single level adds up to O(n), those log n levels multiply out to O(n log n) in total. Think of it the same way a search in a BST costs log n, or the same reason binary search in a sorted array takes log n: each step halves the remaining problem.
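If you want to see the depth empirically, here is a rough sketch (a simplified, non-in-place variant with a random pivot; my own snippet, not from any book) that records the maximum recursion depth and compares it to log2(n):

```python
import math
import random

def quicksort_depth(a, depth=0):
    """Return the maximum recursion depth quicksort reaches on `a`."""
    if len(a) <= 1:
        return depth
    pivot = random.choice(a)
    left = [x for x in a if x < pivot]
    right = [x for x in a if x > pivot]
    return max(quicksort_depth(left, depth + 1),
               quicksort_depth(right, depth + 1))

for n in (1_000, 10_000, 100_000):
    data = random.sample(range(n), n)
    print(n, quicksort_depth(data), round(math.log2(n), 1))
```

On random input the measured depth comes out to a small multiple of log2(n) (the splits are not perfectly even, so there is a constant factor), nowhere near the depth of n you would get from a sorted input with a fixed pivot.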
PS: Math tells the truth; invest in understanding it, and you shall become a better computer scientist! =)