I am training a fully convolutional neural network on 3080*16 input images, with 16 images per batch, for 100 epochs.
    in every epoch:
        after each batch:
            calculate errors, do weight update, get confusion matrix
        after each validation_batch:
            calculate errors and confusion matrix
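A minimal sketch of that loop, assuming a PyTorch-style setup; the network, loss, optimizer, and loaders below are placeholder stand-ins (the real model is a fully convolutional network and the real loaders yield the actual data):

    import torch
    import torch.nn as nn

    NUM_CLASSES = 2

    def confusion_matrix(preds, targets, num_classes):
        # Flatten predictions/targets and count (target, prediction) pairs.
        idx = targets.view(-1) * num_classes + preds.view(-1)
        return torch.bincount(idx, minlength=num_classes ** 2).view(num_classes, num_classes)

    # Placeholder network and dummy data so the sketch runs end to end.
    model = nn.Conv2d(3, NUM_CLASSES, kernel_size=1)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    train_loader = [(torch.randn(16, 3, 64, 64),
                     torch.randint(0, NUM_CLASSES, (16, 64, 64)))]
    val_loader = [(torch.randn(16, 3, 64, 64),
                   torch.randint(0, NUM_CLASSES, (16, 64, 64)))]

    for epoch in range(100):
        model.train()
        for images, targets in train_loader:        # batches of 16
            optimizer.zero_grad()
            outputs = model(images)
            loss = criterion(outputs, targets)      # calculate errors
            loss.backward()
            optimizer.step()                        # weight update
            cm = confusion_matrix(outputs.argmax(1), targets, NUM_CLASSES)

        model.eval()
        with torch.no_grad():
            for images, targets in val_loader:      # validation batches
                outputs = model(images)
                val_loss = criterion(outputs, targets)
                cm = confusion_matrix(outputs.argmax(1), targets, NUM_CLASSES)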
I am trying to use the largest batch size possible.
In this situation (with the number of epochs fixed) you have a trade-off between the number of updates and the quality of each update. The more often you update your network (the smaller the batch), the better the network you might get, assuming you use the right regularization and babysit the training. The bigger the batch, the better each update approximates the true gradient, so your network may converge faster to a quality solution, avoiding noisy updates that can actually worsen your model.
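To make the trade-off concrete, assuming the 3080*16 above means 3,080 batches of 16 images (49,280 images in total), the number of weight updates per epoch shrinks as the batch grows:

    num_images = 3080 * 16                    # 49,280 training images in total
    for batch_size in (16, 64, 128):
        updates_per_epoch = num_images // batch_size
        print(batch_size, updates_per_epoch)  # 16 -> 3080, 64 -> 770, 128 -> 385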
The best way to set the batch size is either to research whether someone has already found a good batch size for your task, or to run a grid/random search meta-optimization: choose a set of reasonable batch-size values and test each option to find the best one.
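A minimal sketch of such a grid search, assuming a hypothetical train_and_evaluate(batch_size) function that trains the network with the given batch size and returns a validation score (higher is better):

    candidate_batch_sizes = [8, 16, 32, 64, 128]

    best_size, best_score = None, float("-inf")
    for batch_size in candidate_batch_sizes:
        score = train_and_evaluate(batch_size)  # hypothetical training run
        if score > best_score:
            best_size, best_score = batch_size, score

    print(f"Best batch size: {best_size} (score {best_score:.4f})")

For a random search you would instead sample batch sizes from the same range; either way, make sure each candidate is evaluated on the same validation set so the scores are comparable.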