
How to See Evaluation Metrics in YOLOv6?


I have the following output, but I can't figure out how to evaluate the model because there is no F1 score or confusion matrix.

Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000

Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.250

Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.410

20/499 0.001595 0.6697 0 1.393: 100%|██████████| 12/12 [00:

21/499 0.001594 0.6417 0 1.353: 100%|██████████| 12/12 [00:

22/499 0.001594 0.6727 0 1.431: 100%|██████████| 12/12 [00:

I trained for 400 epochs, and this is just a small part of the output. I can't see the mAP either.

This is the line I use for evaluation:

!python tools/eval.py --data Fabric-Defect-2/data.yaml --weights runs/train/exp/weights/best_ckpt.pt --device 0

Is there a way to obtain detailed evaluation metrics such as F1 score, confusion matrix, and mAP?


Solution

  • Try this:

    !python tools/eval.py --data Fabric-Defect-2/data.yaml --weights runs/train/exp/weights/best_ckpt.pt --device 0 --do_pr_metric True --plot_confusion_matrix --plot_curve True
    

    Adding those three arguments gave me a confusion matrix along with F1, P, PR, and R curve plots.
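
    For context, the F1 curve is just the harmonic mean of precision and recall plotted against the confidence threshold, so you can also recompute F1 yourself from the reported P and R values. A minimal sketch of that relationship (the helper name here is my own, not something from YOLOv6):

        def f1_score(precision, recall):
            # Harmonic mean of precision and recall; this is the quantity
            # traced out on the F1 curve as the confidence threshold varies.
            if precision + recall == 0:
                return 0.0
            return 2 * precision * recall / (precision + recall)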

    These are the explanations of those arguments that I found in the eval.py script (a note on the boolean_string type follows the list):

    • '--do_pr_metric', default=False, type=boolean_string, help='whether to calculate precision, recall and F1, set False to close'
    • '--plot_curve', default=True, type=boolean_string, help='whether to save plots in savedir when do pr metric, set False to close'
    • '--plot_confusion_matrix', default=False, action='store_true', help='whether to save confusion matrix plots when do pr metric, might cause no harm warning print'
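
    Note that --do_pr_metric and --plot_curve expect an explicit True/False value (they are parsed with a boolean_string type), while --plot_confusion_matrix is a plain store_true switch that takes no value. A boolean_string converter is typically just a few lines; this is a sketch of the usual pattern, not necessarily YOLOv6's exact code:

        def boolean_string(s):
            # argparse hands over the raw command-line token as a string, so
            # 'False' would otherwise be truthy; map the two literals to bools.
            if s not in {'True', 'False'}:
                raise ValueError('Not a valid boolean string')
            return s == 'True'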

    I also suggest playing around with the --task flag. It has three options ('val', 'test', or 'speed'). I haven't tried test or speed, so I don't know what they output; experiment and see which one you really need.
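
    For example, switching tasks is just a matter of passing the flag (again, I've only verified val myself, so treat the speed run below as untested):

        !python tools/eval.py --data Fabric-Defect-2/data.yaml --weights runs/train/exp/weights/best_ckpt.pt --device 0 --task speed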