python, machine-learning, precision-recall

Calculate Precision and Recall from a multiclass CSV dataset.


I need to calculate precision and recall from a CSV that contains multiclass classification results.

To be more specific, my CSV is structured as follows:

real_class1, classified_class1
real_class2, classified_class3
real_class3, classified_class4
real_class4, classified_class2

In total there are six classes.

In the binary case I have no problem understanding how to calculate True Positives, False Positives, True Negatives and False Negatives. But with multiple classes I don't know how to proceed.

Can someone show me an example, possibly in Python?


Solution

  • As suggested in the comment, you have to create the confusion matrix and follow these steps:

    (I'm assuming that you are using Spark to get better performance for the machine learning processing.)

    from __future__ import division
    import pandas as pd
    import numpy as np
    import pickle
    from pyspark import SparkContext, SparkConf
    from pyspark.sql import SQLContext, functions as fn
    from sklearn.metrics import confusion_matrix
    
    def getFirstColumn(line):
        parts = line.split(',')
        return parts[0].strip()  # true class; strip the space after the comma
    
    def getSecondColumn(line):
        parts = line.split(',')
        return parts[1].strip()  # predicted class; strip the space after the comma
    
    # Initialization
    conf = SparkConf()
    conf.setAppName("ConfusionMatrixPrecisionRecall")
    
    sc = SparkContext(conf=conf) # SparkContext
    sqlContext = SQLContext(sc) # SqlContext
    
    data = sc.textFile('YOUR_FILE_PATH') # Load dataset
    
    y_true = data.map(getFirstColumn).collect()  # extract the true class from each line
    y_pred = data.map(getSecondColumn).collect() # extract the predicted class from each line
    
    cm = confusion_matrix(y_true, y_pred)  # rows = true classes, columns = predicted classes
    print("Confusion matrix:\n%s" % cm)
    
    # The True Positives are simply the diagonal elements
    TP = np.diag(cm)
    print("\nTP:\n%s" % TP)
    
    # The False Positives are the sum of the respective column, minus the diagonal element (i.e. the TP element)
    FP = np.sum(cm, axis=0) - TP
    print("\nFP:\n%s" % FP)
    
    # The False Negatives are the sum of the respective row, minus the diagonal (i.e. TP) element
    FN = np.sum(cm, axis=1) - TP
    print("\nFN:\n%s" % FN)
    
    num_classes = cm.shape[0] # known a priori (6 in your case), or taken from the matrix shape
    TN = []
    
    for i in range(num_classes):
        temp = np.delete(cm, i, 0)    # delete ith row
        temp = np.delete(temp, i, 1)  # delete ith column
        TN.append(temp.sum())         # everything outside row i and column i is a TN for class i
    print("\nTN:\n%s" % TN)
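    
    # Alternative not in the original answer, equivalent to the loop above:
    # every cell that is not in row i or column i is a true negative for
    # class i, so TN can also be computed in one vectorized step as
    # TN = cm.sum() - (TP + FP + FN)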
    
    
    
    
    precision = TP/(TP+FP)
    recall = TP/(TP+FN)
    
    print("\nPrecision:\n%s" % precision)
    
    print("\nRecall:\n%s" % recall)
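
  • If you want to sanity-check the manual computation (or skip Spark entirely, since the input is just a CSV), here is a minimal sketch using pandas plus scikit-learn's precision_score / recall_score, which should give the same per-class values. The 'YOUR_FILE_PATH' placeholder, the y_true/y_pred column names and the header=None / skipinitialspace=True options are assumptions about your file: two columns, no header row, and a space after the comma as in your example rows.

    import pandas as pd
    from sklearn.metrics import precision_score, recall_score
    
    # Read the two-column CSV: true label, predicted label
    df = pd.read_csv('YOUR_FILE_PATH', header=None,
                     names=['y_true', 'y_pred'], skipinitialspace=True)
    
    # Per-class precision and recall; the values are ordered by the sorted
    # class labels, the same order confusion_matrix uses for rows/columns
    print(precision_score(df['y_true'], df['y_pred'], average=None))
    print(recall_score(df['y_true'], df['y_pred'], average=None))
    
    # Single aggregated numbers, if needed
    print(precision_score(df['y_true'], df['y_pred'], average='macro'))
    print(recall_score(df['y_true'], df['y_pred'], average='macro'))

    Macro averaging weights every class equally; use average='micro' or average='weighted' instead if you want class frequencies to be taken into account.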