python machine-learning mnist

Why is the confusion_matrix different when I execute it again?


I wonder why the confusion_matrix changes when I execute the code a second time, and whether this is avoidable. To be exact, I got [[53445 597] [958 5000]] the first time, but [[52556 1486] [805 5153]] when I execute it again.

import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

# get the data from the dataset and split into training set and test set
mnist = fetch_openml('mnist_784', as_frame=False)  # as_frame=False keeps X as a NumPy array so the integer indexing below works
X, y = mnist['data'], mnist['target']
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
# make the data random
shuffle_index = np.random.permutation(60000)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]  
# true for all y_train='2', false for all others
y_train_2 = (y_train == '2')    
y_test_2 = (y_test == '2')

# train the classifier on a T/F label depending on whether the digit is 2
# I use the random_state as 0, so it will not change, am I right?
sgd_clf = SGDClassifier(random_state=0)
sgd_clf.fit(X_train, y_train_2)

# get the confusion_matrix
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_2, cv=3)
print('confusion_matrix is', confusion_matrix(y_train_2, y_train_pred))

Solution

  • You are using different data on each run: shuffle_index is a fresh random permutation every time the script executes, so there is no reason for the training run and the resulting confusion matrix to be exactly the same, though the results should be close if the algorithm is doing a good job. Note that random_state=0 only fixes the randomness inside SGDClassifier; it has no effect on np.random.permutation.

    To get rid of the randomness, either use fixed indices:

    shuffle_index = np.arange(60000) #Rather "not_shuffled_index"
    

    Or use the same seed:

    np.random.seed(1) #Or any number
    shuffle_index = np.random.permutation(60000) #Will be the same for a given seed
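As a quick standalone check (plain NumPy, no MNIST download needed), seeding immediately before each call reproduces the same permutation, while an unseeded follow-up call almost surely gives a different order:

```python
import numpy as np

# Re-seeding before each call makes the permutation reproducible.
np.random.seed(1)
a = np.random.permutation(60000)
np.random.seed(1)
b = np.random.permutation(60000)
print(np.array_equal(a, b))  # True: identical shuffles

# Without re-seeding, the generator state has advanced,
# so the next permutation is (practically) always different.
c = np.random.permutation(60000)
print(np.array_equal(a, c))  # False
```

With the seed fixed this way, X_train and y_train are shuffled identically on every run, so cross_val_predict sees the same data and the confusion matrix becomes repeatable.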