I am doing binary classification using model subclassing in TensorFlow. My code is:
class ChurnClassifier(Model):
    def __init__(self):
        super(ChurnClassifier, self).__init__()
        self.layer1 = layers.Dense(20, input_dim=20, activation='relu')
        self.layer2 = layers.Dense(41, activation='relu')
        self.layer3 = layers.Dense(83, activation='relu')
        self.layer4 = layers.Dense(2, activation='sigmoid')

    def call(self, inputs):
        x = self.layer1(inputs)
        x = self.layer2(x)
        x = self.layer3(x)
        return self.layer4(x)
ChurnClassifier = ChurnClassifier()
ChurnClassifier.compile(optimizer='adam',
                        loss=tf.keras.losses.CategoricalCrossentropy(),
                        metrics=['accuracy'])
Then I fitted the model:
history = ChurnClassifier.fit(X_train_nur, Y_train_nur,
                              epochs=20,
                              batch_size=512,
                              validation_data=(X_val_nur, Y_val_nur),
                              shuffle=True)
Now I want to predict the class (0 or 1), so I used: prediction = ChurnClassifier.predict(X_val_nur)
Then, to count how many predictions are 0 and how many are 1 (so I can calculate TN, FN, TP, and FP), I created a DataFrame from the predictions:
pred_y = pd.DataFrame(prediction, columns=['pred_y'])
But I am getting the following DataFrame:
My sample X_train:
array([[2.02124594e+08, 3.63743942e+04, 2.12000000e+02, ...,
4.30000000e+01, 0.00000000e+00, 1.00000000e+00],
[4.93794595e+08, 6.66593354e+02, 4.22000000e+02, ...,
2.60000000e+01, 0.00000000e+00, 1.00000000e+00],
[7.28506124e+08, 1.17953696e+04, 1.14000000e+03, ...,
2.50000000e+01, 0.00000000e+00, 1.00000000e+00],
...,
[4.63797916e+08, 1.19273275e+03, 4.10000000e+02, ...,
9.00000000e+00, 0.00000000e+00, 1.00000000e+00],
[4.04285400e+08, 1.87350825e+04, 3.01000000e+02, ...,
1.60000000e+01, 0.00000000e+00, 1.00000000e+00],
[5.08433538e+08, 3.19289528e+03, 4.18000000e+02, ...,
9.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
My sample y_train: array([0, 0, 0, ..., 0, 0, 0], dtype=int64)
Y_train_nur contains only 0s and 1s.
What is the issue?
Thanks in advance!
For binary classification, the last layer of the model has to contain a single neuron, and the model has to be compiled with
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True)
The revised code would be:
class ChurnClassifier(Model):
    def __init__(self):
        super(ChurnClassifier, self).__init__()
        self.layer1 = layers.Dense(20, input_dim=20, activation='relu')
        self.layer2 = layers.Dense(41, activation='relu')
        self.layer3 = layers.Dense(83, activation='relu')
        self.layer4 = layers.Dense(1)  # single logit output; no activation

    def call(self, inputs):
        x = self.layer1(inputs)
        x = self.layer2(x)
        x = self.layer3(x)
        return self.layer4(x)
ChurnClassifier = ChurnClassifier()
ChurnClassifier.compile(optimizer='adam',
                        loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                        metrics=['accuracy'])
history = ChurnClassifier.fit(X_train_nur, Y_train_nur,
                              epochs=20,
                              batch_size=512,
                              validation_data=(X_val_nur, Y_val_nur),
                              shuffle=True)
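With from_logits=True, predict returns raw logits rather than probabilities, so you still need to apply a sigmoid and a 0.5 threshold before counting TN, FN, TP, and FP. Here is a minimal NumPy sketch of that step (the logits array and labels below are made-up illustration data; in practice you would pass the output of ChurnClassifier.predict(X_val_nur) and your Y_val_nur):

```python
import numpy as np

def logits_to_classes(logits, threshold=0.5):
    """Convert raw logits to 0/1 labels: sigmoid, then threshold."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=np.float64)))
    return (probs >= threshold).astype(int).ravel()

def confusion_counts(y_true, y_pred):
    """Return (TN, FP, FN, TP) for binary labels."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    return tn, fp, fn, tp

# Hypothetical logits, shaped like the model's (n_samples, 1) output.
logits = np.array([[-2.0], [0.3], [1.5], [-0.1]])
y_pred = logits_to_classes(logits)            # -> [0, 1, 1, 0]
y_true = np.array([0, 1, 0, 0])
tn, fp, fn, tp = confusion_counts(y_true, y_pred)
```

tf.keras.metrics also provides TruePositives, FalsePositives, etc. if you prefer to track these during training, but the thresholding logic is the same.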