I want to feed images with the shape (160,320,3) to
VGG16(input_tensor=input_tensor, include_top=False)
How can I include a layer that reshapes the images to the shape expected by the VGG16 model, which is (224,224,3)?
The VGG16 model in itself is just a fixed sequence of layers, with fixed convolution kernel sizes, plus a set of learned weights. That doesn't mean those convolution kernels cannot be applied to images of other sizes.
For example, in your case:
from keras.models import Model
from keras.layers import Dense, Flatten
from keras.applications import vgg16

# Load the VGG16 convolutional base with ImageNet weights, built for 160x320 inputs.
model = vgg16.VGG16(weights='imagenet', include_top=False, input_shape=(160,320,3))
model.summary(line_length=150)

# Attach a new classification head on top of the convolutional base.
flatten = Flatten()
new_layer2 = Dense(10, activation='softmax', name='my_dense_2')

inp2 = model.input
out2 = new_layer2(flatten(model.output))

model2 = Model(inp2, out2)
model2.summary(line_length=150)
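As a quick sanity check (my addition, not part of the answer above), you can confirm what the new head is actually connected to: with a (160,320,3) input the convolutional output is (5, 10, 512), so the Flatten layer feeds 5*10*512 = 25600 features into the new Dense layer.

print(model.output_shape)   # (None, 5, 10, 512) -- conv base output for 160x320 inputs
print(model2.output_shape)  # (None, 10)         -- the new softmax head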
According to here, the minimum image size is 48x48x3; anything above that is fine.
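To see why very small inputs eventually stop making sense (a rough illustration of my own, not from the linked docs): VGG16 halves each spatial dimension five times, once per pooling block, so the final feature map is roughly the input size divided by 32.

# Five max-pooling layers, each dividing a spatial dimension by 2 (floor division).
def vgg16_spatial_size(side):
    for _ in range(5):
        side //= 2
    return side

for side in (48, 160, 224, 320):
    print(side, '->', vgg16_spatial_size(side))
# 48 -> 1, 160 -> 5, 224 -> 7, 320 -> 10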
Now, it's true the original weights were learnt on (224,224,3) shaped images, but the filter weights act as a very good starting point for new tasks with a new set of images. You do need to re-train the network, but it will converge very quickly. This is the basis of transfer learning.
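A minimal sketch of that re-training step, assuming a 10-class problem to match the Dense(10) head above; X_train and y_train are placeholder arrays of my own, not from the original answer. Freezing the convolutional base and training only the new head is one common starting point:

import numpy as np

# Freeze the pretrained convolutional base so only the new Dense head is trained at first.
for layer in model.layers:
    layer.trainable = False

model2.compile(optimizer='adam',
               loss='categorical_crossentropy',
               metrics=['accuracy'])

# Placeholder data purely for illustration; replace with your own (160, 320, 3) images
# and one-hot labels for 10 classes.
X_train = np.random.rand(8, 160, 320, 3)
y_train = np.eye(10)[np.random.randint(0, 10, size=8)]

model2.fit(X_train, y_train, epochs=2, batch_size=4)

Once the new head has converged, a common next step is to unfreeze some of the top convolutional blocks and continue training with a lower learning rate.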