
How to calculate Imbalance Accuracy Metric in multi-class classification


Sorry to bother you, but I found an interesting article, "Mortaz, E. (2020). Imbalance accuracy metric for model selection in multi-class imbalance classification problems. Knowledge-Based Systems, 210, 106490" (https://www.sciencedirect.com/science/article/pii/S0950705120306195), in which the authors calculate a measure called the Imbalance Accuracy Metric (IAM). The formula is given in the paper, and I understand it, but how can I replicate it in R?

I apologise in advance for the dumb question. Thank you for your attention!


Solution

  • The IAM formula provided in the article is:

    IAM = (1/k) * Σ_{i=1..k} [ c_ii − max(Σ_{j≠i} c_ij, Σ_{j≠i} c_ji) ] / max(Σ_j c_ij, Σ_j c_ji)

    where c_ij is the element (i, j) of the classifier's confusion matrix c, and k is the number of classes (k >= 2). The paper shows that this metric can be used as a standalone metric for multi-class model selection.

    A Python implementation of the IAM (Imbalance Accuracy Metric) is provided below:

    def IAM(c):
        '''
        c is a nested list representing the classifier's confusion matrix (len(c) >= 2).
        '''
        k = len(c)
        iam = 0

        for i in range(k):
            sum_row = 0
            sum_col = 0
            sum_row_no_i = 0
            sum_col_no_i = 0
            for j in range(k):
                sum_row += c[i][j]
                sum_col += c[j][i]
                if j != i:  # off-diagonal entries only
                    sum_row_no_i += c[i][j]
                    sum_col_no_i += c[j][i]
            iam += (c[i][i] - max(sum_row_no_i, sum_col_no_i)) / max(sum_row, sum_col)
        return iam / k
    
    c = [[2129,   52,    0,    1],
         [499,   70,    0,    2],
         [46,   16,    0,   1],
         [85,   18,    0,   7]]
    
    print(IAM(c))  # -0.5210576475801445
    
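    The same computation can also be vectorized with NumPy. This is a sketch of my own, not code from the paper, and the name `iam_numpy` is hypothetical; it assumes the input is a square confusion matrix:

```python
import numpy as np

def iam_numpy(c):
    # Vectorized IAM: per-class terms computed from row/column sums.
    c = np.asarray(c, dtype=float)
    diag = np.diag(c)
    row_sums = c.sum(axis=1)   # totals per true class
    col_sums = c.sum(axis=0)   # totals per predicted class
    row_err = row_sums - diag  # off-diagonal row sums
    col_err = col_sums - diag  # off-diagonal column sums
    terms = (diag - np.maximum(row_err, col_err)) / np.maximum(row_sums, col_sums)
    return terms.mean()

c = [[2129, 52, 0, 1],
     [499, 70, 0, 2],
     [46, 16, 0, 1],
     [85, 18, 0, 7]]
print(iam_numpy(c))  # about -0.52106, matching the loop version
```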

    An R implementation of the IAM (Imbalance Accuracy Metric) is provided below:

    IAM <- function(cm) {
      # cm is a matrix representing the classifier's confusion matrix (nrow(cm) >= 2).
      k <- nrow(cm)
      result <- 0

      for (i in 1:k) {
        sum_row <- 0
        sum_col <- 0
        sum_row_no_i <- 0
        sum_col_no_i <- 0

        for (j in 1:k) {
          sum_row <- sum_row + cm[i, j]
          sum_col <- sum_col + cm[j, i]
          if (i != j) {  # off-diagonal entries only
            sum_row_no_i <- sum_row_no_i + cm[i, j]
            sum_col_no_i <- sum_col_no_i + cm[j, i]
          }
        }
        result <- result + (cm[i, i] - max(sum_row_no_i, sum_col_no_i)) / max(sum_row, sum_col)
      }
      return(result / k)
    }
    
    c <- matrix(c(2129, 52, 0, 1,
                  499, 70, 0, 2,
                  46, 16, 0, 1,
                  85, 18, 0, 7), nrow = 4, byrow = TRUE)  # byrow = TRUE so the rows match the Python example
    
    IAM(c)  # -0.5210576475801445
    

    Another example, using the iris dataset (a 3-class problem) and sklearn:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix
    
    X, y = load_iris(return_X_y=True)
    clf = LogisticRegression(max_iter = 1000).fit(X, y)
    pred = clf.predict(X)
    c = confusion_matrix(y, pred)
    print('confusion matrix:')
    print(c)
    print(f'accuracy : {clf.score(X, y):.2f}')
    print(f'IAM : {IAM(c):.2f}')
    
    confusion matrix:
    [[50  0  0]
     [ 0 47  3]
     [ 0  1 49]]
    accuracy : 0.97
    IAM : 0.92
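
    The IAM is most informative on imbalanced data: for the 4-class confusion matrix used in the earlier example, plain accuracy looks respectable while the IAM is strongly negative, because the minority classes are almost never predicted. A small self-contained check (the `iam` helper below simply re-implements the loop version above):

```python
def iam(c):
    # Loop implementation of the IAM over a square confusion matrix c.
    k = len(c)
    total = 0.0
    for i in range(k):
        row = sum(c[i])                       # total per true class
        col = sum(c[j][i] for j in range(k))  # total per predicted class
        row_err = row - c[i][i]               # off-diagonal row sum
        col_err = col - c[i][i]               # off-diagonal column sum
        total += (c[i][i] - max(row_err, col_err)) / max(row, col)
    return total / k

c = [[2129, 52, 0, 1],
     [499, 70, 0, 2],
     [46, 16, 0, 1],
     [85, 18, 0, 7]]

accuracy = sum(c[i][i] for i in range(4)) / sum(map(sum, c))
print(f'accuracy: {accuracy:.3f}')  # 0.754
print(f'IAM: {iam(c):.3f}')         # -0.521
```

    A classifier that mostly predicts the majority class can still score about 75% accuracy here, which is exactly the model-selection pitfall the paper's metric is designed to expose.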