scikit-learn, data-science, multiclass-classification

Which method should be used to evaluate imbalanced multi-class classification?


I am working with imbalanced multi-class data. My dependent variable is highly skewed:

       Injury
       2 (No Injury)        208753
       1 (Medium Injury)     22318
       0 (Severe Injury)      3394

I have used a random forest algorithm with the parameter class_weight='balanced' to manage the imbalance, which class 2 dominates.
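
For reference, this is roughly the setup (a minimal sketch, not my exact code): the features are assumed to be in X and the 0/1/2 injury labels in y, and the split, random_state, and AUC call are placeholders.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import (accuracy_score, cohen_kappa_score, confusion_matrix,
                                 f1_score, precision_score, recall_score, roc_auc_score)

    # X holds the features and y the 0/1/2 injury labels (placeholders for the real data)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=42)

    clf = RandomForestClassifier(class_weight='balanced', random_state=42)
    clf.fit(X_train, y_train)

    y_pred = clf.predict(X_test)
    y_proba = clf.predict_proba(X_test)    # class probabilities, needed for multi-class AUC

    avg = 'micro'                          # switched to 'macro' for the second run
    print(confusion_matrix(y_test, y_pred))
    print("Accuracy Score:", accuracy_score(y_test, y_pred))
    print("precision score:", precision_score(y_test, y_pred, average=avg))
    print("Recall score:", recall_score(y_test, y_pred, average=avg))
    print("AUC Score:", roc_auc_score(y_test, y_proba, multi_class='ovr'))  # one-vs-rest AUC; my exact AUC call may differ
    print("F1 score:", f1_score(y_test, y_pred, average=avg))
    print("Kappa Score:", cohen_kappa_score(y_test, y_pred))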

I get the results below when I use average='micro'.

       [[   34   107   688]
        [  148   778  4592]
        [  905  4635 46730]]
        Accuracy Score: 0.8110616374089428
        precision score: 0.8110616374089428
        Recall score: 0.8110616374089428
        AUC Score: 0.8582962280567071
        F1 score: 0.8110616374089428
        Kappa Score: 0.05522284663052324 

For average='macro', the results are below.

        [[   31   125   684]
         [  157   838  4559]
         [  890  4694 46639]]
        Accuracy Score: 0.8104816009007626
        precision score: 0.3586119227436326
        Recall score: 0.3602869806251181
        AUC Score: 0.5253225798824679
        F1 score: 0.3592735337079687
        Kappa Score: 0.06376296115668922

So, which results should I consider to evaluate the model? If I have to consider the macro results, then my model's performance is really bad. Please suggest any methods to improve the precision, recall, and AUC scores.

If I consider the micro results, my precision, recall, and F1 score are all the same. How can I justify this in the project?

Any help would be appreciated.

Thank you.


Solution

  • As with most data-science questions, the answer to "which one is better" boils down to "it depends". Is it important to have good performance for each class individually? Or are you more concerned with getting good overall performance?

    When you set average='micro' you are measuring the overall performance of the algorithm across the classes. For example, to calculate the precision you would add up the true positive predictions from every class and divide by the total of true positives plus false positives, which, using your data, would be:

    (34 + 778 + 46730) / ((34 + 778 + 46730) + (148 + 905 + 107 + 4635 + 688 + 4592))
    

    The result is 0.81106. When you look at the details, however, you notice that there is wide variation in the precision of the individual classes and that the overall calculation is largely being driven by the No Injury class:

    Severe Injury = 0.0312
    Medium Injury = 0.1409
    No Injury     = 0.8985
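
    (As a quick check, those per-class numbers and the 0.81106 micro figure can be reproduced directly from the confusion matrix; the NumPy sketch below is only an illustration, and with the raw labels precision_score(y_true, y_pred, average=None) and average='micro' give the same values.)

    import numpy as np

    # Confusion matrix from the question (rows = true class, columns = predicted class)
    cm = np.array([[   34,   107,   688],
                   [  148,   778,  4592],
                   [  905,  4635, 46730]])

    tp = np.diag(cm)                    # true positives per class
    fp = cm.sum(axis=0) - tp            # false positives per class (column total minus TP)

    per_class = tp / (tp + fp)          # ~[0.0313, 0.1409, 0.8985] -> Severe, Medium, No Injury
    micro = tp.sum() / (tp + fp).sum()  # ~0.8111, the micro-averaged precision
    print(per_class, micro)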
    

    When you set average='macro' you're averaging the precision calculations of each class together, removing the influence of class imbalance because every class counts equally regardless of its size. Using the per-class precisions calculated above, your overall precision with average='macro' would be:

    (0.0312 + 0.1409 + 0.8985) / 3 = 0.356
    

    Notice here that the inputs are the precision calculations for each individual class and that they are each weighted equally. Because the Severe Injury and Medium Injury classes have much lower precision scores, and since each class now counts equally regardless of its size, this macro precision ends up much lower than the micro one.
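
    (A one-line check: the macro figure is just the unweighted mean of those per-class precisions, which is also what precision_score(y_true, y_pred, average='macro') computes from the raw labels.)

    import numpy as np

    per_class = np.array([0.0312, 0.1409, 0.8985])  # per-class precisions quoted above
    print(per_class.mean())                         # ~0.357 (the 0.356 figure above, rounding aside)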

    So, which one is better depends on what is important to you and your use case. If you are concerned with making sure that as many cases as possible, regardless of class, are assigned to the correct class, then average='micro' is the metric to use, but note that by doing this the result will be dominated by a single class in your example. If either the "Severe" or "Medium" category is of most importance, then you probably would not want to assess your model using average='micro', since the overall figure will look high even when the results for those classes on their own are poor.
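
    If the "Severe" and "Medium" classes matter most, one practical habit (a generic snippet, not part of the original answer) is to report the per-class breakdown rather than any single averaged number, for example with classification_report:

    from sklearn.metrics import classification_report

    # y_test / y_pred assumed to hold the true and predicted injury labels
    print(classification_report(
        y_test, y_pred,
        target_names=['Severe Injury', 'Medium Injury', 'No Injury']))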