Bucket sort creates k buckets and distributes the n numbers among them, e.g. 1-10, 11-20, 21-30, ... which takes O(n+k) time.
The numbers within each bucket are then sorted with insertion sort, which is O(n²) in the worst case.
This works well when only a few numbers end up in the same bucket, giving O(n+k) overall, but if all numbers land in one bucket it degrades to O(n²).
My question is: if we make the range of each bucket 1, i.e. 0-1, 1-2, 2-3, ..., then different numbers can't end up in the same bucket, so no sorting within a bucket is required and the whole sort is O(n+k).
Setting aside space complexity, why don't we use this instead of counting sort? Please correct me if I'm wrong.
What you propose is a distribution sort called counting sort, only a simpler version where you know that elements are not duplicated, so each count stops at 1. It is very efficient in time, O(N+n), but does require O(N) space, where N is the size of the value range and n the number of elements.
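A minimal sketch of this simplified counting sort, assuming the inputs are distinct integers in the range [0, N) (the function name and parameters are illustrative, not from the original post):

```python
def presence_sort(items, value_range):
    """Sort distinct integers in [0, value_range) with a presence array.

    Because elements are assumed unique, each slot holds at most one
    value, so the "count" never exceeds 1. Time is O(N + n), space is
    O(N), with N == value_range and n == len(items).
    """
    present = [False] * value_range          # O(N) space
    for x in items:                          # O(n): mark each value
        present[x] = True
    # O(N): sweep the range in order and gather the marked values
    return [v for v in range(value_range) if present[v]]
```

If duplicates were allowed, `present` would become an integer count per slot and the gather step would emit each value `count` times, which is exactly the general counting sort.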
Many people will naturally use this method when asked to sort a deck of cards: they dispatch each card to its position on the table in order to form 4 lines of 13 cards, and the final step is to gather the cards line by line. Here we have N == n, and since both steps take O(n) time, the sort is very efficient.
When N becomes substantially larger than n, say you want to sort a pile of 20 dollar bills by their serial numbers, this method becomes totally impractical.
If you are sorting integers, you might consider another method with O(n) time complexity: radix sort.
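A least-significant-digit radix sort can be sketched like this (a hedged illustration, not code from the answer); it side-steps the huge-N problem by making one O(n + base) distribution pass per digit instead of one slot per possible value:

```python
def radix_sort(nums, base=10):
    """LSD radix sort for non-negative integers.

    Each pass stably distributes the numbers into `base` buckets by one
    digit, least significant first, then gathers the buckets in order.
    With d digits in the largest number, total time is O(d * (n + base)).
    """
    if not nums:
        return []
    place = 1
    largest = max(nums)
    while place <= largest:
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // place) % base].append(x)   # stable dispatch
        nums = [x for b in buckets for x in b]       # gather in order
        place *= base
    return nums
```

Stability of each pass is what makes the digit-by-digit approach correct: ties on the current digit preserve the order established by the previous, less significant, passes.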