Tags: algorithm, heap, binomial-heap

Binomial heap: more efficient way for initial build than successive inserts?


Is there a way to build a binomial heap from n given elements with worst-case complexity below O(n log n), which is what n successive inserts cost in the worst case? (I know that the amortized cost of insert is O(1), so the average build time is lower.) For binary heaps there is a more efficient build that places all n elements into an array (a complete binary tree) and performs siftDown on the first half of the elements in reverse order, which runs in O(n). Just wondering: does something similarly clever exist for binomial heaps?
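
For reference, the heapify-based binary-heap build mentioned above looks roughly like this (a minimal sketch with illustrative function names; Python's `heapq.heapify` implements the same O(n) build):

```python
import random

def sift_down(a, i, n):
    """Push a[i] down until the min-heap property holds below it."""
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        smallest = i
        if left < n and a[left] < a[smallest]:
            smallest = left
        if right < n and a[right] < a[smallest]:
            smallest = right
        if smallest == i:
            return
        a[i], a[smallest] = a[smallest], a[i]
        i = smallest

def build_min_heap(a):
    """Build a binary min-heap in place in O(n) by sifting down
    the internal nodes (the first half of the array) in reverse order."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        sift_down(a, i, n)
    return a

# Usage: afterwards a[i] <= a[2*i+1] and a[i] <= a[2*i+2] for every i.
heap = build_min_heap(random.sample(range(1000), 100))
assert all(heap[i] <= heap[c]
           for i in range(len(heap))
           for c in (2 * i + 1, 2 * i + 2) if c < len(heap))
```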


Solution

  • Actually, inserting all n values into the heap one at a time takes only O(n) time in total. Although the worst-case cost of a single binomial heap insertion is O(log n), its amortized cost is O(1).

    Here's one way of seeing this with an amortized analysis. Maintain one credit on each tree in the binomial heap: every insertion creates exactly one new tree (the new singleton) and pays O(1) to place a credit on it. If an insertion then merges together k existing trees, its actual cost is Θ(1 + k), but it can spend the k credits sitting on those trees, one per tree merged, so its amortized cost is O(1). Therefore any sequence of n insertions into an empty binomial heap, with no intervening deletions, takes O(n) time in total; the first sketch below makes this concrete by counting merges. Unlike the heapify-based build for binary heaps, this works even if you don't know the number of elements n in advance.

    Alternatively, you could use lazy binomial heaps, where insertions take worst-case time O(1) and deletions are amortized O(log n). In that case a series of n insertions takes O(n) time as well, now even in the worst case; the second sketch below shows the idea.

    Hope this helps!
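
To make the credit argument concrete, here is a minimal sketch (the names `BinomialNode`, `link`, and `insert` are illustrative, not from any particular library) that performs n successive inserts and counts link operations. Each insert creates exactly one new tree and each link destroys one, so the total number of links over the whole build is at most n - 1:

```python
import random

class BinomialNode:
    """Node of a binomial tree: a key plus a list of child trees."""
    def __init__(self, key):
        self.key = key
        self.children = []

def link(a, b, stats):
    """Merge two binomial trees of equal order in O(1): the larger
    root becomes a child of the smaller (min-heap order)."""
    stats["links"] += 1
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    return a

def insert(trees, key, stats):
    """Insert into a binomial heap stored as {order: tree}.
    Works like binary increment: keep linking while a tree of the
    current order already exists (the 'k merges' in the answer)."""
    carry = BinomialNode(key)
    order = 0
    while order in trees:
        carry = link(trees.pop(order), carry, stats)
        order += 1
    trees[order] = carry

# n successive inserts: every insert adds one tree, every link removes one,
# so the total number of links is below n and the whole build is O(n).
n = 100_000
heap, stats = {}, {"links": 0}
for x in random.sample(range(10 * n), n):
    insert(heap, x, stats)
print(stats["links"], "links for", n, "inserts")   # always fewer than n
```

The final count equals n minus the number of trees left in the heap, which is exactly the credit argument above: every credit spent pays for one link.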
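
And a minimal sketch of the lazy variant (again with illustrative names, not a library API): insert just appends a singleton tree to the root list in worst-case O(1), and all the linking is deferred to extract_min, which is amortized O(log n):

```python
import random

class LazyNode:
    """Binomial-tree node; a tree's order equals len(children)."""
    def __init__(self, key):
        self.key = key
        self.children = []

class LazyBinomialHeap:
    """Lazy binomial heap sketch: O(1) worst-case insert,
    amortized O(log n) extract_min."""
    def __init__(self):
        self.roots = []                     # unordered list of tree roots

    def insert(self, key):
        self.roots.append(LazyNode(key))    # no linking at all: O(1) worst case

    def extract_min(self):
        m = min(self.roots, key=lambda t: t.key)   # scan the root list
        self.roots.remove(m)
        self.roots.extend(m.children)       # the children become roots
        self._consolidate()                 # pay off the deferred linking
        return m.key

    def _consolidate(self):
        """Link trees of equal order until all orders are distinct,
        just as an eager binomial heap would have done at insert time."""
        by_order = {}
        for tree in self.roots:
            order = len(tree.children)
            while order in by_order:
                other = by_order.pop(order)
                if other.key < tree.key:    # keep the smaller key on top
                    tree, other = other, tree
                tree.children.append(other)
                order += 1
            by_order[order] = tree
        self.roots = list(by_order.values())

# n inserts cost O(n) even in the worst case; repeated extract_min sorts.
data = random.sample(range(1000), 200)
h = LazyBinomialHeap()
for x in data:
    h.insert(x)
assert [h.extract_min() for _ in data] == sorted(data)
```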