Tags: python, machine-learning, scikit-learn, naivebayes

Compute yScore of Learning Algorithm


I'm quite new to the Python ML environment. I need to plot a precision/recall graph and, as stated in this post: https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html, you need to compute the y_score:

    # Create a simple classifier
    classifier = svm.LinearSVC(random_state=random_state)
    classifier.fit(X_train, y_train)
    y_score = classifier.decision_function(X_test)
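
The linked example also computes the average_precision that shows up in the plot title; a minimal sketch of that step (reusing y_test and y_score from the snippet above):

    from sklearn.metrics import average_precision_score

    # Average precision summarizes the precision-recall curve as a single number
    average_precision = average_precision_score(y_test, y_score)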

So the question is: how can I compute the y_score when using Multinomial Naive Bayes or LearningTree? In my code I have:

 print("MultinomialNB - countVectorizer")

    xTrain, xTest, yTrain, yTest=countVectorizer(db)

    classifier = MultinomialNB()
    model = classifier.fit(xTrain, yTrain)
    yPred = model.predict(xTest)

    print("confusion Matrix of MNB/ cVectorizer:\n")
    print(confusion_matrix(yTest, yPred))
    print("\n")
    print("classificationReport Matrix of MNB/ cVectorizer:\n")
    print(classification_report(yTest, yPred))

    elapsed_time = time.time() - start_time
    print("elapsed Time: %.3fs" %elapsed_time) 

Plot function:

    def plotLearningAlgorithm(yTest, yScore, algName):

        precision, recall, _ = precision_recall_curve(yTest, yScore)

        step_kwargs = {'step': 'post'}

        plt.step(recall, precision, color='b', alpha=0.2,
                 where='post')
        plt.fill_between(recall, precision, alpha=0.2, color='b', **step_kwargs)

        plt.xlabel('Recall')
        plt.ylabel('Precision')
        plt.ylim([0.0, 1.05])
        plt.xlim([0.0, 1.0])
        plt.title('2-class Precision-Recall ' + algName + ' curve: AP={0:0.2f}'.format(average_precision))

Error with plot:

<ipython-input-43-d07c3365bfc2> in MultinomialNaiveBayesOPT()
     11     yPred = model.predict(xTest)
     12 
---> 13     plotLearningAlgorithm(yTest,model.predict_proba(xTest),"MultinomialNB - countVectorizer")
     14 
     15     print("confusion Matrix of MNB/ cVectorizer:\n")

<ipython-input-42-260aac9918f2> in plotLearningAlgorithm(yTest, yScore, algName)
      1 def plotLearningAlgorithm(yTest,yScore,algName):
      2 
----> 3     precision, recall, _ = precision_recall_curve(yTest, yScore)
      4 
      5     step_kwargs = ({'step': 'post'}

/opt/anaconda3/lib/python3.7/site-packages/sklearn/metrics/ranking.py in precision_recall_curve(y_true, probas_pred, pos_label, sample_weight)
    522     fps, tps, thresholds = _binary_clf_curve(y_true, probas_pred,
    523                                              pos_label=pos_label,
--> 524                                              sample_weight=sample_weight)
    525 
    526     precision = tps / (tps + fps)

/opt/anaconda3/lib/python3.7/site-packages/sklearn/metrics/ranking.py in _binary_clf_curve(y_true, y_score, pos_label, sample_weight)
    398     check_consistent_length(y_true, y_score, sample_weight)
    399     y_true = column_or_1d(y_true)
--> 400     y_score = column_or_1d(y_score)
    401     assert_all_finite(y_true)
    402     assert_all_finite(y_score)

/opt/anaconda3/lib/python3.7/site-packages/sklearn/utils/validation.py in column_or_1d(y, warn)
    758         return np.ravel(y)
    759 
--> 760     raise ValueError("bad input shape {0}".format(shape))
    761 
    762 

ValueError: bad input shape (9000, 2)

Here db contains my dataset, already divided into a train set and a test set. Any suggestions?

Solution:

    from sklearn.metrics import precision_recall_curve
    import matplotlib.pyplot as plt


    def plot_pr(y_pred, y_true, l):
        precision, recall, thresholds = precision_recall_curve(y_true, y_pred, pos_label=l)
        return precision, recall


    def plotPrecisionRecall(xTest, yTest, yPred, learningName, model):
        # Keep only the probabilities of the positive class ("L")
        yPred_probability = model.predict_proba(xTest)
        yPred_probability = yPred_probability[:, 1]
        # Baseline "no skill" classifier that assigns the same score to every sample
        no_skill_probs = [0 for _ in range(len(yTest))]
        ns_precision, ns_recall, _ = precision_recall_curve(yTest, no_skill_probs, pos_label="L")
        precision, rec = plot_pr(yPred_probability, yTest, "L")
        plt.title(learningName)
        plt.plot(ns_recall, ns_precision, linestyle='--', label='No Skill')
        plt.plot(rec, precision, label='Skill')
        plt.xlabel("Recall")
        plt.ylabel("Precision")
        plt.legend()
        plt.show()

As it turns out, y_Pred needed to be transformed with:

    yPred_probability = yPred_probability[:, 1]

A big thank you to @ignoring_gravity for providing the right solution. I've also plotted the no-skill line to add readability to the graph.
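
For reference, a minimal sketch of how the helper above can be called with the MultinomialNB model from the question (countVectorizer and db are the question's own helper and dataset):

    xTrain, xTest, yTrain, yTest = countVectorizer(db)

    classifier = MultinomialNB()
    model = classifier.fit(xTrain, yTrain)
    yPred = model.predict(xTest)

    # "L" is the positive label hard-coded inside plotPrecisionRecall above
    plotPrecisionRecall(xTest, yTest, yPred, "MultinomialNB - countVectorizer", model)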


Solution (by @ignoring_gravity)

  • What they call y_score is just the predicted probabilities outputted by your ML algorithm.

    In multinomial nb and in a decision tree (I suppose that's what you mean by LearningTree?), you can do this with the method .predict_proba:

        classifier = MultinomialNB()
        model = classifier.fit(xTrain, yTrain)
        yPred = model.predict_proba(xTest)
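
    Note that .predict_proba returns one probability column per class (hence the (9000, 2) shape in the error), so for precision_recall_curve you keep only the positive-class column, as in the fix above. Here is a minimal, self-contained sketch with scikit-learn's DecisionTreeClassifier (assuming that is what "LearningTree" refers to), on synthetic data just for illustration:

        # Hypothetical sketch: y_score for a decision tree via predict_proba
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import precision_recall_curve

        X, y = make_classification(n_samples=1000, n_classes=2, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

        # predict_proba has shape (n_samples, 2); keep the positive-class column
        y_score = tree.predict_proba(X_test)[:, 1]
        precision, recall, _ = precision_recall_curve(y_test, y_score)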