
Unexpected behavior when evaluating based on bounding box predictions #3187

Open
ascCognify opened this issue Feb 12, 2025 · 0 comments

📚 The doc issue

When evaluating Top-Down models using bounding box predictions generated by an object detector, the following happens during the validation/test phase:

  1. No keypoints are visualized in the ground truth images.
  2. The AP Coco Metric is close to 0.

To reproduce this problem, use the td-hm_ViTPose-large_8xb64-210e_coco-256x192.py config file with a bbox_file set. When setting bbox_file=None instead, everything works as expected (visible keypoints and reasonable AP scores).
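For context, a minimal sketch of how the validation dataset is typically configured in an MMPose 1.x config when detector boxes are used instead of ground-truth boxes. The exact keys below (paths, batch size) are illustrative assumptions, not copied from the affected config:

```python
# Sketch of a top-down val dataset using detector-predicted boxes.
# The bbox_file path is the standard COCO person-detection results
# file shipped with MMPose recipes; adjust to your own data layout.
val_dataloader = dict(
    batch_size=32,
    dataset=dict(
        type='CocoDataset',
        data_root='data/coco/',
        data_mode='topdown',
        ann_file='annotations/person_keypoints_val2017.json',
        # With bbox_file set, evaluation crops are taken from these
        # detector predictions; with bbox_file=None, ground-truth
        # boxes from ann_file are used instead.
        bbox_file='data/coco/person_detection_results/'
                  'COCO_val2017_detections_AP_H_56_person.json',
    ),
)
```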

What is the expected behavior of setting a bbox_file?
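When AP collapses to near zero with a bbox_file, a common culprit is a malformed detection-results file (wrong keys, or boxes in the wrong coordinate convention). A small sanity check, assuming the COCO detection-results format (a JSON list of dicts with image_id, category_id, bbox as [x, y, w, h], and score) that top-down datasets expect; the path is a placeholder:

```python
import json

def check_bbox_file(path):
    """Sanity-check a detection-results file against the COCO format:
    a non-empty JSON list of dicts with image_id, category_id,
    bbox ([x, y, w, h]) and score, with positive box sizes."""
    with open(path) as f:
        dets = json.load(f)
    assert isinstance(dets, list) and dets, "expected a non-empty list"
    required = {"image_id", "category_id", "bbox", "score"}
    for d in dets[:100]:  # spot-check the first 100 entries
        missing = required - d.keys()
        assert not missing, f"missing keys: {missing}"
        x, y, w, h = d["bbox"]
        assert w > 0 and h > 0, f"degenerate box: {d['bbox']}"
    return len(dets)

# Example (placeholder path):
# n = check_bbox_file("data/coco/person_detection_results/"
#                     "COCO_val2017_detections_AP_H_56_person.json")
```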

Suggest a potential alternative/fix

No response
