The TensorFlow Object Detection API provides model config files for training. These config files contain `min_scale` and `max_scale` parameters for object detection, set to 0.2 and 0.95 respectively by default. I have some questions about these parameters:

Do these parameters control the size of detectable objects?
Well, yes and no. Those parameters live inside the `ssd_anchor_generator` definition, which is itself an `anchor_generator`. That part of the system is responsible for providing the anchor boxes that the subsequent box prediction stage works from.
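To make the role of these two parameters concrete, here is a small sketch of how SSD-style anchor scales are typically derived from `min_scale` and `max_scale`: one base scale per feature-map layer, interpolated linearly between the two bounds (this follows the scheme from the SSD paper; the exact layer count and interpolation in your config may differ).

```python
def anchor_scales(min_scale=0.2, max_scale=0.95, num_layers=6):
    """Return one base anchor scale per feature-map layer,
    linearly interpolated between min_scale and max_scale."""
    return [min_scale + (max_scale - min_scale) * i / (num_layers - 1)
            for i in range(num_layers)]

scales = anchor_scales()
print([round(s, 2) for s in scales])  # [0.2, 0.35, 0.5, 0.65, 0.8, 0.95]
# For a 300x300 input, the smallest base anchors are roughly
# 300 * scales[0] = 60 pixels on a side.
```

So `min_scale` and `max_scale` bound the range of anchor sizes, and the layers in between get intermediate scales.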
If we set the network input size to 300x300 and `min_scale=0.2`, does that mean the network cannot detect objects smaller than 300 x 0.2 = 60 pixels?
No. The minimum size of a detectable object is not determined by `min_scale` alone (which only affects anchor generation); it also depends on, for example, the data the network was trained on, the network depth, and so on.
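One way to see why `min_scale` is not a hard cutoff: a small object can still overlap an anchor, just with low IoU, and the box-regression head can then shrink the anchor toward the object. A rough sketch (hypothetical centered square boxes, not the API's actual matcher):

```python
def iou(side_a, side_b):
    """IoU of two axis-aligned squares sharing the same center."""
    inter = min(side_a, side_b) ** 2
    union = side_a ** 2 + side_b ** 2 - inter
    return inter / union

# A 30-px object vs. the smallest 60-px anchor (min_scale=0.2 on a 300-px input):
print(iou(30, 60))  # 0.25 -- below the usual 0.5 matching threshold
```

Such objects get poor anchor matches during training, which hurts recall on small objects, but nothing in the architecture makes 60 pixels a strict lower bound.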
As far as I know, `ssd_mobilenet_v2_coco` has trouble detecting small objects. If we set `min_scale = 0.05` and train the same model on small objects, is it possible to detect objects as small as 300 x 0.05 = 15 pixels?
Maybe? That depends entirely on your data. Modifying the `min_scale` parameter might help (and it might indeed make sense to select a different range for those parameters), but you will need to experiment with your data.
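If you want to try this, the change goes in the `ssd_anchor_generator` block of your pipeline config. A sketch of the relevant fragment (surrounding fields abbreviated; check your config for the full block):

```
anchor_generator {
  ssd_anchor_generator {
    num_layers: 6
    min_scale: 0.05   # lowered from the default 0.2
    max_scale: 0.95
    aspect_ratios: 1.0
  }
}
```

Then retrain and compare small-object recall against the default setting on a held-out set.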