While training with the TensorFlow Object Detection API, I always get accumulated evaluation results of 0. Here is the corresponding log output:
Accumulating evaluation results...
DONE (t=1.51s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Why is this happening? Following the TFOD documentation, I provided the train.record and test.record files to the model, but I did not supply any separate validation or evaluation dataset, since I did not find any such requirement. Could that be the reason?
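For context, the TFOD API normally takes the evaluation set from the eval_input_reader section of the pipeline config rather than from a separate command-line flag. A minimal sketch of the relevant sections is below; the paths are placeholders, not my actual ones:

```
train_input_reader {
  label_map_path: "annotations/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "annotations/train.record"
  }
}

eval_input_reader {
  label_map_path: "annotations/label_map.pbtxt"
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "annotations/test.record"
  }
}
```

So if test.record is wired into eval_input_reader, it doubles as the evaluation set.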
Additionally, here is the training command:
!python /content/models/research/object_detection/model_main.py \
--pipeline_config_path={pipeline_fname} \
--model_dir={model_dir} \
--alsologtostderr \
--num_train_steps={num_train_steps} \
--num_eval_steps={num_eval_steps}
Here, I set these variables to the following values:
Additional information: earlier I was getting an
Out of range: End of sequence
warning, which was probably caused by another error, TypeError: 'numpy.float64' object cannot be interpreted as an integer. Both were fixed by downgrading the NumPy version.
EDIT: Apart from this, I have found another problem. During testing, the output does not show any bounding boxes on the test images.
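For what it's worth, that TypeError matches a known pattern in older pycocotools, where a float count is passed as the num argument of np.linspace; NumPy 1.18+ rejects that with exactly this message, which is why downgrading NumPy worked. A minimal sketch of the issue (illustrative, not the actual pycocotools source):

```python
import numpy as np

# pycocotools-style IoU threshold construction: np.round returns
# np.float64, so `num` ends up a float rather than an int.
num = np.round((0.95 - 0.5) / 0.05) + 1

# On NumPy >= 1.18 this line raises:
#   TypeError: 'numpy.float64' object cannot be interpreted as an integer
# np.linspace(0.5, 0.95, num)

# Casting the count to int avoids the error without downgrading NumPy:
iou_thrs = np.linspace(0.5, 0.95, int(num), endpoint=True)
print(iou_thrs)  # the ten COCO IoU thresholds, 0.50 through 0.95
```

Patching the cast into pycocotools (or upgrading it) is an alternative to pinning an old NumPy.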
Alright! I have found the likely cause: the pre-trained model. Initially I was using ssd_mobilenet_v1_coco_2017_11_17
as the pre-trained model and getting the error. When I switched models, the error went away. I am currently using ssd_mobilenet_v2_coco_2018_03_29,
which is an updated version of it. Below is a portion of the log showing that everything now appears to be working:
Accumulating evaluation results...
DONE (t=1.05s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.329
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.835
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.111
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.200
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.298
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.392
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.206
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.439
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.441
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.274
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.409
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.502
Edit: The problem of the missing bounding boxes was also resolved after switching the pre-trained model, as mentioned above.
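A side note on the missing boxes: TFOD's visualizer (visualize_boxes_and_labels_on_image_array) silently drops detections scoring below min_score_thresh, which defaults to 0.5, so a poorly converged model may produce boxes that are all filtered out rather than none at all. A self-contained sketch of that filtering logic, using hypothetical scores rather than my actual model outputs:

```python
import numpy as np

# Hypothetical detections shaped like TFOD output: scores sorted
# descending, boxes as [ymin, xmin, ymax, xmax] in normalized coords.
scores = np.array([0.42, 0.31, 0.07])
boxes = np.array([[0.1, 0.1, 0.5, 0.5],
                  [0.2, 0.2, 0.6, 0.6],
                  [0.0, 0.0, 0.3, 0.3]])

def keep_above(boxes, scores, min_score_thresh=0.5):
    """Mimic the visualizer's filter: keep boxes scoring >= threshold."""
    mask = scores >= min_score_thresh
    return boxes[mask], scores[mask]

kept, _ = keep_above(boxes, scores)          # default 0.5: nothing survives
kept_lo, _ = keep_above(boxes, scores, 0.3)  # lowering it reveals two boxes
```

So before swapping models, lowering min_score_thresh can help confirm whether the model is emitting low-confidence boxes or nothing at all.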