I used the ResNet-18 from PyTorch to predict an image. I've read that (224, 224) is the expected image size for this model, but when I resized the image to (124, 124) or (324, 324), it still worked. Can anybody tell me why?
The implementation of the ResNet variants in PyTorch includes an AdaptiveAvgPool2d layer before the fully connected layer, which ensures that the features reaching the fully connected layer always have the correct shape, regardless of the spatial size of the input.
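Here is a minimal sketch that demonstrates this (assuming a recent torchvision where `resnet18` takes a `weights` argument): the adaptive average pool collapses whatever spatial size reaches it down to 1x1, so the classifier always receives a 512-dimensional vector and the output shape stays the same for every input size.

```python
import torch
from torchvision import models

# weights=None builds the architecture without pretrained parameters;
# pass weights="IMAGENET1K_V1" if you want the pretrained model instead.
model = models.resnet18(weights=None)
model.eval()

print(model.avgpool)  # AdaptiveAvgPool2d(output_size=(1, 1))

for size in (124, 224, 324):
    x = torch.randn(1, 3, size, size)  # dummy batch of one image
    with torch.no_grad():
        out = model(x)
    print(size, out.shape)  # torch.Size([1, 1000]) for every size
```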
In addition, an input size of 224x224 is recommended to avoid suboptimal amounts of padding in the convolutional layers.
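If you are using the pretrained weights, recent torchvision (0.13+) also exposes the preprocessing those weights were trained with through the weights enum; a short sketch, assuming that API is available:

```python
from torchvision import models

weights = models.ResNet18_Weights.IMAGENET1K_V1
preprocess = weights.transforms()
# Printing the transform shows it resizes to 256, center-crops to 224,
# and normalizes with the ImageNet mean and std.
print(preprocess)
```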