Tags: python, pandas, scikit-learn, neural-network, confusion-matrix

How to get exactly inverse prediction results? - sklearn predicts perfectly wrong


My models are predicting perfectly wrong outcomes. For a two-class classification problem, there are a lot of false positives and false negatives. In fact, I would get a nice result if I could just invert the predictions. I have a simple snippet like the following:

from sklearn import neural_network
from sklearn.metrics import classification_report, confusion_matrix

clf = neural_network.MLPClassifier(solver='lbfgs', alpha=1e-5,
                                   hidden_layer_sizes=(5, 2),
                                   random_state=1, max_iter=5000)
clf.fit(X_train, y_train)
print('TRAIN')
print(classification_report(y_train, clf.predict(X_train)))
print(confusion_matrix(y_train, clf.predict(X_train)))
print('\nTEST')
print(classification_report(y_test, clf.predict(X_test)))
print(confusion_matrix(y_test, clf.predict(X_test)))

And the confusion matrix is something like

[[2 7]
 [8 2]]

So what I would actually want is an output like

[[8 2]
 [2 7]]

How can I achieve this without operating directly on the results? Thanks in advance.


Solution

  • If you have an original dataframe:

    X,y
    

    and you did:

    from sklearn.model_selection import train_test_split
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
    

    Then the code is correct, which means: don't change anything in the output. What you could do is run another train/test split to see how the results change, as in the sketch below. You just have a bad classifier, but don't tune it manually, that's bullshit.
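
    A minimal sketch of that check, assuming X and y are the full feature matrix and labels from the question (the seed values are arbitrary):

    from sklearn import neural_network
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.metrics import confusion_matrix

    clf = neural_network.MLPClassifier(solver='lbfgs', alpha=1e-5,
                                       hidden_layer_sizes=(5, 2),
                                       random_state=1, max_iter=5000)

    # Try several independent splits: if the "inverted" pattern only shows
    # up for some seeds, it is noise from a small test set, not a signal
    # you can safely flip.
    for seed in (0, 1, 42, 123):
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.33, random_state=seed)
        clf.fit(X_train, y_train)
        print(seed)
        print(confusion_matrix(y_test, clf.predict(X_test)))

    # Cross-validated accuracy averages over folds and is a more robust
    # estimate than any single split.
    print(cross_val_score(clf, X, y, cv=5).mean())

    Note that the confusion matrix in the question covers only 19 samples, so a swing of a few predictions can flip it completely; that is exactly why a single split is not trustworthy.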