Tags: deep-learning, conv-neural-network, classification

Why is accuracy so different when I use evaluate() and predict()?


I have a Convolutional Neural Network that solves an image classification problem with 2 classes (binary classification), using a sigmoid output.

To evaluate the model I use:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

path_dir = '../../dataset/train'
parth_dir_test = '../../dataset/test'

datagen = ImageDataGenerator(
                    rescale=1./255,
                    validation_split = 0.2)

test_set = datagen.flow_from_directory(parth_dir_test,
                                        target_size= (150,150),
                                        batch_size = 64,
                                        class_mode = 'binary')

score = classifier.evaluate(test_set, verbose=0)


print('Test Loss', score[0])
print('Test accuracy', score[1])

And it outputs the test loss and test accuracy (screenshot omitted).

When I try to print the classification report, I use:

from sklearn.metrics import classification_report

yhat_classes = classifier.predict_classes(test_set, verbose=0)
yhat_classes = yhat_classes[:, 0]

print(classification_report(test_set.classes, yhat_classes))

But now I get a very different accuracy (screenshot omitted).

If I print test_set.classes, the first 344 entries of the array are 0 and the next 344 are 1. Is the test_set shuffled before being fed into the network?


Solution

  • I needed to add shuffle=False. The code that works is:

    test_set = datagen.flow_from_directory(parth_dir_test,
                                            target_size=(150,150),
                                            batch_size=64,
                                            class_mode='binary',
                                            shuffle=False)