I have large images (6000x4000) and I want to train Faster R-CNN to detect quite small objects (typically between 50 and 150 pixels). So for memory reasons I crop the images to 1000x1000. The training is OK. When I test the model on 1000x1000 images the results are really good. When I test the model on 6000x4000 images the results are really bad...
I guess the problem is in the region proposal step, but I don't know what I am doing wrong (the keep_aspect_ratio_resizer max_dimension is fixed to 12000)...
Thanks for your help!
You need to keep the training images and the images you test on at roughly the same dimensions. If you are using random resizing as data augmentation, you can vary the test image size by roughly that factor.
The best way to deal with this problem is to crop the large image into tiles of the same dimensions as used in training, run detection on each tile, and then use non-maximum suppression (NMS) across the tiles to merge the predictions.
That way, if your smallest object to detect is around 50 px, you can have training images of size ~500 px.
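For illustration, here is a minimal sketch of that tile-and-merge idea in plain NumPy. It assumes a hypothetical `detect(crop)` callable (your trained model's inference wrapper, not part of any library) that returns boxes as (x1, y1, x2, y2) plus scores in crop coordinates; the tile size, overlap, and IoU threshold are example values you would tune:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression on (x1, y1, x2, y2) boxes."""
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep

def detect_tiled(image, detect, tile=1000, overlap=200):
    """Run `detect` on overlapping tiles, shift boxes back to
    full-image coordinates, and merge duplicates with NMS."""
    h, w = image.shape[:2]
    step = tile - overlap  # overlap so objects on tile borders appear whole
    all_boxes, all_scores = [], []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            boxes, scores = detect(image[y:y + tile, x:x + tile])
            if len(boxes) == 0:
                continue
            boxes = np.asarray(boxes, dtype=float)
            boxes[:, [0, 2]] += x  # shift from crop to full-image coords
            boxes[:, [1, 3]] += y
            all_boxes.append(boxes)
            all_scores.append(np.asarray(scores, dtype=float))
    if not all_boxes:
        return np.empty((0, 4)), np.empty(0)
    boxes = np.concatenate(all_boxes)
    scores = np.concatenate(all_scores)
    keep = nms(boxes, scores)
    return boxes[keep], scores[keep]
```

The overlap matters: with non-overlapping tiles, an object sitting on a tile boundary gets cut in half and is often missed, so the overlap should be at least as large as your biggest expected object (here 200 px comfortably covers 150 px objects).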