What is the difference between batching your dataset with `dataset.batch(batch_size)` and batching it with the `batch_size` parameter of your model's `.fit()` method? Do they do the same thing, or are they different?
Check the documentation for the `batch_size` parameter of `fit`:

> **batch_size**: Integer or `None`. Number of samples per gradient update. If unspecified, `batch_size` will default to 32. Do not specify the `batch_size` if your data is in the form of datasets, generators, or `keras.utils.Sequence` instances (since they generate batches).
So, if you are passing a dataset object for training, do not use the `batch_size` parameter; it is only meant for the case where your x/y values are NumPy arrays or TensorFlow tensors.
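To illustrate the two cases side by side, here is a minimal sketch (the model, data shapes, and batch size of 10 are arbitrary placeholders, not from the docs):

```python
import numpy as np
import tensorflow as tf

# Toy data: 100 samples with 4 features each, binary labels.
x = np.random.rand(100, 4).astype("float32")
y = np.random.randint(0, 2, size=(100,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Case 1: NumPy arrays -- fit() does the batching, so batch_size applies.
model.fit(x, y, batch_size=10, epochs=1, verbose=0)

# Case 2: a tf.data.Dataset -- it already yields batches of 10,
# so batch_size must NOT be passed to fit().
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(10)
model.fit(ds, epochs=1, verbose=0)
```

In both cases the model sees 10 samples per gradient update; the only difference is which component performs the batching. Mixing the two (a batched dataset plus `batch_size`) is what the docs warn against.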