I'm trying to train my neural network for 10 epochs, but my attempts are unsuccessful. I don't understand why I always get something like this:
35/300 [==>...........................] - ETA: 1:09 - loss: 0.0000e+00 - accuracy: 1.0000
36/300 [==>...........................] - ETA: 1:09 - loss: 0.0000e+00 - accuracy: 1.0000
37/300 [==>...........................] - ETA: 1:08 - loss: 0.0000e+00 - accuracy: 1.0000
Here are my batch size, image width/height, and the whole feeding process:
import numpy as np
import tensorflow as tf

batch_size = 32
img_height = 150
img_width = 150

dataset_url = "http://cnrpark.it/dataset/CNR-EXT-Patches-150x150.zip"
print(dataset_url)
data_dir = tf.keras.utils.get_file(origin=dataset_url,
                                   fname='CNR-EXT-Patches-150x150',
                                   untar=True)
train_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
num_classes = 1
val_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
class_names = train_ds.class_names
print(class_names)
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break
normalization_layer = tf.keras.layers.Rescaling(1./255)
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
print(np.min(first_image), np.max(first_image))
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1./255),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
    tf.keras.layers.Dense(num_classes)
])
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=['accuracy'])
model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=10
)
From np.min and np.max I get 0.08627451 and 0.5568628, so the input data obviously isn't the problem. What could be wrong in my attempt?
EDIT: my epoch now looks like this:
5/300 [..............................] - ETA: 1:12 - loss: 0.1564 - accuracy: 0.9750
6/300 [..............................] - ETA: 1:15 - loss: 0.1311 - accuracy: 0.8333
7/300 [..............................] - ETA: 1:13 - loss: 0.1124 - accuracy: 0.7143
8/300 [..............................] - ETA: 1:13 - loss: 0.0984 - accuracy: 0.6250
9/300 [..............................] - ETA: 1:12 - loss: 0.0874 - accuracy: 0.5556
And a little later:
51/300 [====>.........................] - ETA: 1:04 - loss: 0.0154 - accuracy: 0.0980
You have set num_classes = 1, although your dataset has two classes: LABEL is 0 for free, 1 for busy.
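You can confirm this right after creating train_ds (before the cache()/prefetch() calls); here is a quick sanity check of my own, reusing the class_names attribute your code already prints:

# Sanity check (my addition): how many classes did the loader actually find?
class_names = train_ds.class_names
print(class_names, len(class_names))         # expect two entries, not one

for _, labels_batch in train_ds.take(1):
    print(np.unique(labels_batch.numpy()))   # typically shows both 0 and 1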
So, if you want to use tf.keras.losses.SparseCategoricalCrossentropy, try:
tf.keras.layers.Dense(2)
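As a rough sketch of that variant, keeping your convolutional base as-is (I have also dropped the intermediate Dense(1, activation='sigmoid') layer, which is my assumption rather than something the loss requires):

num_classes = 2  # two classes: free / busy

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1./255),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes)  # raw logits, one per class
])

model.compile(
    optimizer='adam',
    # from_logits=True because the final Dense layer has no softmax
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])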
You could also consider using binary_crossentropy if you only have two classes. You would have to change your loss function and output layer to:
tf.keras.layers.Dense(1, activation="sigmoid")
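A corresponding sketch for the binary variant, again keeping your convolutional layers and using a single sigmoid unit as the only dense output:

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1./255),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid")  # probability of the "busy" class (label 1)
])

model.compile(
    optimizer='adam',
    loss=tf.keras.losses.BinaryCrossentropy(),  # default from_logits=False matches the sigmoid output
    metrics=['accuracy'])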