I have 5960 images in my test dataset, and these are the metrics I got:
TP: 5116.0
FP: 794.0
TN: 5116.0
FN: 794.0
len(testX) = 5960
One epoch's log:
185/185 [==============================] - 1s 6ms/step - loss: 0.4095 - tp: 5127.0000 - fp: 783.0000 - tn: 5127.0000 - fn: 783.0000 - accuracy: 0.8675 - precision: 0.8675 - recall: 0.8675 - auc: 0.9200
Load images:
label = 1 if label == "positive" else 0
...
(trainX, testX, trainY, testY) = train_test_split(data, labels,
test_size=0.2, random_state=42)
# convert the labels from integers to vectors
testY = to_categorical(testY, num_classes=2)
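For reference, a minimal standalone sketch (with made-up labels) of what to_categorical does to the flat 0/1 labels:

from tensorflow.keras.utils import to_categorical
import numpy as np

labels = np.array([0, 1, 1, 0])
print(to_categorical(labels, num_classes=2))
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]
#  [1. 0.]]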
I used keras.metrics, and I have only two labels (0 and 1). What did I do wrong?
loss, tp, fp, tn, fn, accuracy, precision, recall, auc = model.evaluate(testX, testY, verbose=1)
I think the problem has something to do with how an image with label '1' is counted.
My model:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense, Dropout

# inputShape and classes are defined earlier in the script (classes == 2)
model = Sequential()
# first set of CONV => RELU => POOL layers
model.add(Conv2D(20, (5, 5), padding="same",
    input_shape=inputShape))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# second set of CONV => RELU => POOL layers
model.add(Conv2D(50, (5, 5), padding="same"))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# first (and only) set of FC => RELU layers
model.add(Flatten())
model.add(Dense(500))
model.add(Activation("relu"))
model.add(Dropout(0.05))
# softmax classifier
model.add(Dense(classes))
model.add(Activation("softmax"))
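The compile call isn't shown in the question; here is a sketch that would reproduce the logged metric names and the unpacking order in the model.evaluate call above (the optimizer and the loss are assumptions):

from tensorflow import keras

METRICS = [
    keras.metrics.TruePositives(name="tp"),
    keras.metrics.FalsePositives(name="fp"),
    keras.metrics.TrueNegatives(name="tn"),
    keras.metrics.FalseNegatives(name="fn"),
    keras.metrics.BinaryAccuracy(name="accuracy"),
    keras.metrics.Precision(name="precision"),
    keras.metrics.Recall(name="recall"),
    keras.metrics.AUC(name="auc"),
]
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=METRICS)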
When you apply one-hot encoding to a binary classification problem, these metrics get mixed up. Here is an example:
Your labels look like this after one-hot encoding: [ ... [1,0], [0,1], [1,0] ... ]
If you pay attention, your TP equals your TN. When a sample is correctly predicted as class 0, it counts as a TP for class 0 and, at the same time, as a TN for class 1. That's why they are equal. The same reasoning makes FP equal FN, which in turn explains why accuracy, precision, and recall are all identical in your log: precision = TP/(TP+FP), recall = TP/(TP+FN), and accuracy = (TP+TN)/(TP+TN+FP+FN) all reduce to 5127/5910 ≈ 0.8675.
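A minimal runnable sketch, with a hypothetical four-sample batch, showing how Keras's confusion-matrix metrics double-count one-hot labels:

import numpy as np
import tensorflow as tf

# Hypothetical four-sample batch: true classes 0, 1, 0, 1, one-hot encoded
y_true = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], dtype="float32")
# Softmax-style outputs: the first three predictions are correct, the last is wrong
y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.6, 0.4]], dtype="float32")

for m in [tf.keras.metrics.TruePositives(name="tp"),
          tf.keras.metrics.TrueNegatives(name="tn"),
          tf.keras.metrics.FalsePositives(name="fp"),
          tf.keras.metrics.FalseNegatives(name="fn")]:
    m.update_state(y_true, y_pred)
    print(m.name, m.result().numpy())
# Output: tp 3.0, tn 3.0, fp 1.0, fn 1.0 -- each correct prediction is
# counted once as a TP (for its own class) and once as a TN (for the other class)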
Don't apply one-hot encoding, and change the final layers to:
model.add(Dense(1))
model.add(Activation("sigmoid"))
Also, the loss should be binary_crossentropy.
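Putting it all together, a minimal sketch of the fixed pipeline; the Adam optimizer and the METRICS list are assumptions carried over from the sketch above:

# Keep the labels as flat 0/1 vectors -- skip to_categorical() entirely
(trainX, testX, trainY, testY) = train_test_split(data, labels,
    test_size=0.2, random_state=42)

model.compile(loss="binary_crossentropy", optimizer="adam", metrics=METRICS)

# With a single sigmoid output there is one prediction per image,
# so tp + fp + tn + fn == len(testX) and TP/TN are no longer mirrored
loss, tp, fp, tn, fn, accuracy, precision, recall, auc = model.evaluate(
    testX, testY, verbose=1)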