I wish to convert one of the existing pre-trained mxnet models available here to a fully convolutional one.
This means being able to input an image of any size, specify the stride, and get a full output map. For instance, assume the model was trained on 224x224x3 images. I want to input an image which is 226x226x3 and specify stride=1, in order to get a 3x3xnum-classes output. I'm not asking "theoretically", but rather for example code :-)
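(For reference, the 3x3 figure above follows from the standard sliding-window arithmetic, out = (in - window) / stride + 1 — a quick check in plain Python, no mxnet needed:)

```python
# Sliding-window output size: out = (in - window) // stride + 1
def output_size(in_size, window, stride):
    return (in_size - window) // stride + 1

# A model trained on 224x224 crops, applied to a 226x226 image with
# stride 1, yields a 3x3 grid of class-score vectors.
print(output_size(226, 224, 1))  # -> 3
```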
Thanks!
According to this example: https://github.com/dmlc/mxnet-notebooks/blob/master/python/tutorials/predict_imagenet.ipynb
you can change the data shape when binding the model:
mod.bind(for_training=False, data_shapes=[('data', (1,3,226,226))])
Then you can input a 3x226x226 image.
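Note that rebinding only changes the accepted input shape; to actually get a 3x3xnum-classes map, any fully connected layer at the end must be reinterpreted as a convolution whose kernels are the fc weights reshaped to (classes, channels, K, K). A minimal numpy sketch of that equivalence (all shapes here are toy values, not taken from a real pre-trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
C, K = 2, 3            # feature channels, fc "window" size
num_classes = 4
H = W = 5              # larger input -> (H-K+1) x (W-K+1) output grid

fc_weight = rng.standard_normal((num_classes, C * K * K))
feat = rng.standard_normal((C, H, W))

# The conv kernel you would load into a Convolution layer is just
# fc_weight.reshape(num_classes, C, K, K); sliding it over the larger
# feature map (stride 1) is equivalent to applying the fc to each crop:
out = np.empty((num_classes, H - K + 1, W - K + 1))
for y in range(H - K + 1):
    for x in range(W - K + 1):
        patch = feat[:, y:y + K, x:x + K].ravel()
        out[:, y, x] = fc_weight @ patch

# The top-left position matches the original fc applied to that crop:
assert np.allclose(out[:, 0, 0], fc_weight @ feat[:, :K, :K].ravel())
print(out.shape)  # -> (4, 3, 3)
```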
Another example: http://mxnet.io/how_to/finetune.html
This example replaces the last layer of a pre-trained model with a new fully connected (fc) layer.