@bencten Confusion matrices and per-class statistics are reported automatically by test.py on the final epoch, or when it is called directly (i.e. `python test.py`). See #1474 for the confusion matrix. test.py per-class (verbose) output is enabled automatically for small datasets, or it can be forced manually.
-
Unless I am misreading the return value, the function `ap_per_class` in https://github.com/ultralytics/yolov5/blob/master/utils/metrics.py returns the mean AP rather than a list of per-class APs.
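For reference, here is a minimal sketch of how a list of per-class APs can be produced by integrating one precision-recall curve per class. This is an illustration, not YOLOv5's actual code, though metrics.py uses a similar envelope-and-integrate scheme; the function name and the example numbers are assumptions.

```python
import numpy as np

def average_precision(recall, precision):
    """Compute AP for one class by integrating its precision-recall curve.
    Illustrative sketch only; not the YOLOv5 implementation."""
    # Append sentinel values at both ends of the curve
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([1.0], precision, [0.0]))
    # Precision envelope: make precision monotonically non-increasing
    p = np.flip(np.maximum.accumulate(np.flip(p)))
    # Sum the area under the curve wherever recall changes
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# One AP per class, kept as a list instead of being reduced to a mean
recalls = [np.array([0.1, 0.5, 0.9]), np.array([0.2, 0.6])]
precisions = [np.array([1.0, 0.8, 0.6]), np.array([0.9, 0.7])]
ap_list = [average_precision(r, p) for r, p in zip(recalls, precisions)]
print(ap_list)
```

Averaging `ap_list` afterwards gives the single mAP figure, which is why only the mean shows up in the summary output.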
Goal
I am trying to plot how each individual class performs, i.e. precision, recall, and AP50 per class label, similar to what a confusion matrix shows.
Currently wandb shows the averaged P, R, AP50, etc., but I'd like to log a dict of per-class values so I can analyze misclassification errors.
If someone has gone through the same steps to save this data out, please let me know.
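One way to get this into wandb is to flatten the per-class arrays into a single dict keyed by class name before logging. A minimal sketch, assuming you already have per-class precision, recall, and AP50 arrays (the class names and numbers below are made-up placeholders):

```python
# Hypothetical class-index -> label map and per-class metric arrays;
# in YOLOv5 these would come from the dataset names and the test loop.
names = {0: "person", 1: "car"}
p    = [0.81, 0.64]   # per-class precision (example values)
r    = [0.77, 0.58]   # per-class recall (example values)
ap50 = [0.83, 0.61]   # per-class AP@0.5 (example values)

# Flatten into one dict so each class gets its own wandb chart
metrics = {}
for i, cls_name in names.items():
    metrics[f"P/{cls_name}"] = p[i]
    metrics[f"R/{cls_name}"] = r[i]
    metrics[f"AP50/{cls_name}"] = ap50[i]

# With wandb installed and a run initialized, this could then be logged via:
#   import wandb; wandb.log(metrics)
print(metrics)
```

The `metric/class_name` key convention groups the per-class series into separate panels in the wandb UI, which makes it easy to spot the classes driving the misclassification errors.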