Hello, I'm getting a ValueError when training my model with model.fit(). I've tried many ways to solve it, but nothing worked. Take a look. Note that I did resize all the images to (512, 512).
................
................
................
def resizing(image, label):
    image = tf.image.resize(image, (512, 512)) / 255.0
    return image, label

mapped_training_set = train_set.map(resizing)
mapped_testing_set = test_set.map(resizing)
mapped_valid_set = valid_set.map(resizing)
tf.keras.layers.Conv2D(32, (3, 3), input_shape=(512, 512, 3), activation="relu"),
tf.keras.layers.MaxPooling2D((2, 2)),
.........
.........
.........
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation="relu"),
tf.keras.layers.Dense(101, activation="softmax")
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

hist = model.fit(mapped_training_set,
                 epochs=10,
                 validation_data=mapped_valid_set,
                 )
**I'm getting this error:**
<ipython-input-31-1d134652773c> in <module>()
1 hist = model.fit(mapped_training_set,
2 epochs=10,
----> 3 validation_data=mapped_valid_set,
4 )
16 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
235 except Exception as e: # pylint:disable=broad-except
236 if hasattr(e, 'ag_error_metadata'):
--> 237 raise e.ag_error_metadata.to_exception(e)
238 else:
239 raise
ValueError: in converted code:
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py:677 map_fn
batch_size=None)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py:2410 _standardize_tensors
exception_prefix='input')
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_utils.py:573 standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking input: expected conv2d_32_input to have 4 dimensions, but got array with shape (512, 512, 3)
I've been searching for a fix for more than two hours now and haven't found an answer.
All the results and solutions I found were off-topic.
Please help, I'm stuck here.
Thanks in advance.
You need to pass your model input with the shape `(batch_size, height, width, channels)`. That's why it says it expects 4 dimensions. Instead, you are passing it single images of shape `(512, 512, 3)`.

If you want to train your model on single images, you should add a batch dimension to each one via `image = tf.expand_dims(image, axis=0)`. This can be done in the `resizing` function.

If you want to train your model in batches, you should add `mapped_training_set = mapped_training_set.batch(batch_size)` after the `map`, then do the same for the other two datasets.
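To make the dimension issue concrete, here is a minimal sketch using NumPy as a stand-in for the TensorFlow tensors (the shape arithmetic is the same; the image size and batch size of 8 are just illustrative): a single image carries 3 dimensions, and either expanding it or batching adds the 4th dimension the Conv2D input expects.

```python
import numpy as np

# A single "image" like the ones produced by the resizing map: 3 dimensions.
image = np.zeros((512, 512, 3), dtype=np.float32)
print(image.shape)  # (512, 512, 3) -- only 3 dims, hence the ValueError

# Option 1: add a batch axis to one image (tf.expand_dims(image, axis=0)
# does the equivalent on a tensor).
single = np.expand_dims(image, axis=0)
print(single.shape)  # (1, 512, 512, 3)

# Option 2: batching stacks several images along a new leading axis,
# which is what dataset.batch(batch_size) does for a tf.data.Dataset.
batch = np.stack([image] * 8, axis=0)
print(batch.shape)  # (8, 512, 512, 3)
```

Either way, `model.fit` then sees 4-D input and the error goes away; batching is the usual choice since it also controls how many images are processed per gradient step.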