Tags: python-2.7, scikit-learn, knn

Getting same value for Precision and Recall (K-NN) using sklearn


Updated question: I did this, but I am getting the same result for both precision and recall. Is it because I am using average='binary'?

But when I use average='macro' I get this error message:

Test a custom review message

C:\Python27\lib\site-packages\sklearn\metrics\classification.py:976: DeprecationWarning: From version 0.18, binary input will not be handled specially when using averaged precision/recall/F-score. Please use average='binary' to report only the positive class performance.
  'positive class performance.', DeprecationWarning)

Here is my updated code:

# Imports needed by the snippet below
# (on scikit-learn < 0.18, train_test_split lives in sklearn.cross_validation)
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
from sklearn.metrics import precision_score, recall_score

path = 'opinions.tsv'
data = pd.read_table(path, header=None, skiprows=1, names=['Sentiment', 'Review'])
X = data.Review
y = data.Sentiment

#Using CountVectorizer to convert text into tokens/features
vect = CountVectorizer(stop_words='english', ngram_range=(1, 1), max_df=.80, min_df=4)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, test_size=0.2)

#Using training data to transform text into counts of features for each message
vect.fit(X_train)
X_train_dtm = vect.transform(X_train)
X_test_dtm = vect.transform(X_test)
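As a quick illustration of what the fit/transform step produces (a toy corpus here, not the actual opinions.tsv data):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus standing in for the review text
toy_docs = ['good movie', 'bad movie', 'good good plot']

toy_vect = CountVectorizer()
toy_dtm = toy_vect.fit_transform(toy_docs)  # sparse document-term matrix

print(sorted(toy_vect.vocabulary_))  # ['bad', 'good', 'movie', 'plot']
print(toy_dtm.toarray())
# [[0 1 1 0]
#  [1 0 1 0]
#  [0 2 0 1]]
```

Each row is one document and each column counts one vocabulary term, which is exactly the shape KNN consumes below.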




#Accuracy using KNN Model
KNN = KNeighborsClassifier(n_neighbors=3)
KNN.fit(X_train_dtm, y_train)
y_pred = KNN.predict(X_test_dtm)
print('\nK Nearest Neighbors (NN = 3)')

#KNN Analysis
tokens_words = vect.get_feature_names()
print('\nAnalysis')
print('Accuracy Score: %f%%' % (metrics.accuracy_score(y_test, y_pred) * 100))
print('Precision Score: %f%%' % (precision_score(y_test, y_pred, average='binary') * 100))
print('Recall Score: %f%%' % (recall_score(y_test, y_pred, average='binary') * 100))

With that code I get the same value for precision and recall.
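One quick way to check whether the two numbers are genuinely identical (they can legitimately differ with average='binary') is to print the per-class breakdown; a sketch on toy binary labels standing in for y_test/y_pred:

```python
from sklearn.metrics import classification_report, precision_score, recall_score

# Toy binary labels standing in for y_test / y_pred
y_true = [1, 1, 1, 0, 0, 1, 0, 0]
y_hat  = [1, 1, 0, 0, 1, 1, 0, 1]

# Precision and recall for the positive class can differ...
print(precision_score(y_true, y_hat, average='binary'))  # 0.6
print(recall_score(y_true, y_hat, average='binary'))     # 0.75

# ...and classification_report shows both classes at once
print(classification_report(y_true, y_hat))
```

If the report shows identical precision and recall for the positive class on your data, that is a property of the predictions (equal numbers of false positives and false negatives), not a bug in the code.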

Thank you for answering my question, much appreciated.


Solution

  • To calculate precision and recall, import the corresponding functions from sklearn.metrics.

    As stated in the documentation, their parameters are 1-d arrays of true and predicted labels:

    from sklearn.metrics import precision_score
    from sklearn.metrics import recall_score
    
    y_true = [0, 1, 2, 0, 1, 2]
    y_pred = [0, 2, 1, 0, 0, 1]
    
    print('Calculating the metrics...')
    
    precision_score(y_true, y_pred, average='macro')
    >>> 0.22
    
    recall_score(y_true, y_pred, average='macro')
    >>> 0.33
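For a two-class problem like the sentiment labels in the question, average='binary' reports only the positive class, while average='macro' is the unweighted mean over both classes; a sketch on made-up labels:

```python
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]

# Positive class only: TP=2, FP=1, FN=2
print(precision_score(y_true, y_pred, average='binary'))  # 0.666...
print(recall_score(y_true, y_pred, average='binary'))     # 0.5

# Unweighted mean over both classes
print(precision_score(y_true, y_pred, average='macro'))   # 0.633...
print(recall_score(y_true, y_pred, average='macro'))      # 0.625
```

Note that the positive-class precision and recall differ here because the predictions contain different numbers of false positives and false negatives.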