Tags: algorithm, big-o, complexity-theory, mergesort, inversion

Complexity of modified MergeSort


I want to count the inversions while sorting an array with merge sort. For that purpose I added a variable in the conditionals that is incremented whenever an inversion is encountered. Pseudocode:

mergesort(M, l, r) begin
  if (l < r) then
    int m <- (l + r - 1)/2; //use int explicitly to round down
    inv <- 0; //inversion counter; note: it must be global (or returned and summed),
              //otherwise the counts from the recursive calls are lost
    mergesort(M, l, m)
    mergesort(M, m+1, r)
    i <- l;     //index into the left half M[l..m]
    j <- m + 1; //index into the right half M[m+1..r]
    k <- l;     //index into the auxiliary array M'
    while(i <= m and j <= r) do
      if (M[i] <= M[j]) then
        M'[k] <- M[i];
        i <- i + 1;
      else
        M'[k] <- M[j];
        j <- j + 1;
        inv <- inv + 1;  //Counting inversions
      k <- k + 1;
    for (h = i, .. , m) do      //copy the remaining left-half elements into M'
      M'[k + (h - i)] <- M[h];
    k <- k + (m - i + 1);       //account for the elements just copied
    for (h = l, .. , k - 1) do  //copy the merged prefix back into M
      M[h] <- M'[h];            //the remaining right-half elements are already in place
end.

However, I am not sure whether the complexity remains the same, O(n log n).

Does incrementing only one variable lead to a worse worst-case complexity? As far as I know, it depends only on the largest summand (the n factor). And would adding a constant number of increments, or at worst (n - 1) + (n - 2) = 2n - 3 increments, change the complexity much? If yes, what would you suggest?
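For comparison, here is a minimal runnable sketch in Python (my own illustration, not the original code; the name sort_and_count is made up). It differs from the pseudocode in two ways: the count is returned from each call instead of resetting a local inv, and when a right-half element is placed it adds the number of left-half elements still remaining, so the total equals the true inversion count. Either way, the extra bookkeeping per merge step is a constant number of operations.

def sort_and_count(a):
    """Merge sort that also returns the number of inversions in a."""
    n = len(a)
    if n <= 1:
        return a, 0
    mid = n // 2
    left, inv_left = sort_and_count(a[:mid])
    right, inv_right = sort_and_count(a[mid:])

    merged = []
    inv = inv_left + inv_right
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
            inv += len(left) - i  # right[j] is inverted with every remaining left element
    merged.extend(left[i:])   # leftover left-half elements
    merged.extend(right[j:])  # leftover right-half elements
    return merged, inv

if __name__ == "__main__":
    data = [2, 4, 1, 3, 5]
    sorted_data, inversions = sort_and_count(data)
    print(sorted_data, inversions)  # [1, 2, 3, 4, 5] 3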


Solution

  • If you look at these two lines

        j <- j + 1;
        inv <- inv + 1;  //Counting inversions
    

    They are both constant-time arithmetic operations at the same depth of the recursion, so O(1) + O(1) = O(1). The extra line inv <- inv + 1; can't change the overall complexity, because the time it takes to execute remains constant; a short recurrence making this concrete is sketched below.
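
    Concretely (a sketch, with c1 and c2 standing for unspecified constants): merging a subarray of length n costs at most c1·n operations, and the inversion increments add at most another c2·n, since there is at most one increment per element placed. The recurrence therefore keeps the same shape:

        T(n) = 2·T(n/2) + c1·n + c2·n = 2·T(n/2) + (c1 + c2)·n

    which still solves to T(n) = O(n log n), e.g. by the master theorem.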