I'm using TensorFlow and Keras version 2.8.0.
I have the following network:
#importing the layers used below
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense

#defining model
model=Sequential()
#adding convolution layer
model.add(Conv2D(256,(3,3),activation='relu',input_shape=(256,256,3)))
#adding pooling layer
model.add(MaxPool2D(2,2))
#adding fully connected layer
model.add(Flatten())
model.add(Dense(100,activation='relu'))
#adding output layer
model.add(Dense(len(classes),activation='softmax'))
#compiling the model
model.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
#fitting the model
model.fit(x_tr,y_tr,epochs=epochs)
# At the 12th epoch, it converges to 1
# The batch size is 125, I think; I don't know why
#evaluating the model
loss_value, accuracy = model.evaluate(x_te, y_te)
#loss_value, accuracy, top_k_accuracy = model.evaluate(x_te, y_te, batch_size=batch_size)
print("loss_value: " + str(loss_value))
print("acuracy: " + str(accuracy))
#predict first 4 images in the test set
ypred = model.predict(x_te)
The point is that I'm now trying to save the model in ".h5" format, but whether I train it for 100 epochs or for 1 epoch, I get a 4.61 GB model file.
Why is this file so big? How can I reduce the model size?
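For reference, this is a minimal sketch of the save step in question, using the standard Keras call; the file name model.h5 is an assumption, since the actual path is not shown above:

#saving the trained model in HDF5 format (file name is an assumption)
model.save("model.h5")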
What I found out, after 5 months of experience, is that the step to take in order to reduce the model size, improve the accuracy score, and reduce the loss value is the following:
tf.data.Dataset.from_tensor_slices((x, y)).batch(32, drop_remainder=True)
Of course, it should be done for the train, test, and validation sets, as in the sketch below. Hope that it helps.
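Here is a minimal sketch of that step applied to all three splits, assuming x_tr/y_tr and x_te/y_te are the train and test arrays from the question, and x_va/y_va is a hypothetical validation split:

import tensorflow as tf

#wrapping each split into a batched tf.data.Dataset (batch size 32, as above)
train_ds = tf.data.Dataset.from_tensor_slices((x_tr, y_tr)).batch(32, drop_remainder=True)
val_ds = tf.data.Dataset.from_tensor_slices((x_va, y_va)).batch(32, drop_remainder=True)
test_ds = tf.data.Dataset.from_tensor_slices((x_te, y_te)).batch(32, drop_remainder=True)

#fitting and evaluating on the batched datasets instead of the raw arrays
model.fit(train_ds, validation_data=val_ds, epochs=epochs)
loss_value, accuracy = model.evaluate(test_ds)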