-
In AI generally, you will usually see graceful degradation of outputs as inputs degrade. Hard failures are rare unless there is an explicit coding error.
-
Hi there,
I am currently doing some experiments on the influence of annotation accuracy on yolov5 model performance.
I know the recommendation is to draw the most accurate bounding box (BB) possible, but sometimes this is hard to achieve.
That's why I artificially manipulated my BBs to evaluate the impact of annotation inaccuracies. The model seems to adapt to enlarged BBs much better than I expected. I wanted to discuss whether you see any reasons for that. Is there a hyperparameter that adapts to the mean BB size that I am not aware of, or something like that?
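For context, the kind of artificial manipulation I mean can be sketched as below. This is a minimal illustration, assuming labels in the YOLO format (normalised `x_center y_center width height`); the function name and parameters are my own, not part of YOLOv5:

```python
import random

def perturb_box(box, scale=1.2, jitter=0.0, rng=None):
    """Enlarge (or shrink) a YOLO-format box and optionally jitter its centre.

    box:    (x_center, y_center, width, height), all normalised to [0, 1].
    scale:  multiplier on width/height (e.g. 1.2 simulates a 20% oversized BB).
    jitter: maximum absolute shift applied to the centre coordinates.
    """
    rng = rng or random.Random(0)
    x, y, w, h = box
    w, h = w * scale, h * scale
    x += rng.uniform(-jitter, jitter)
    y += rng.uniform(-jitter, jitter)
    # Clip so the perturbed box stays fully inside the image.
    w, h = min(w, 1.0), min(h, 1.0)
    x = min(max(x, w / 2), 1 - w / 2)
    y = min(max(y, h / 2), 1 - h / 2)
    return (x, y, w, h)
```

Applying this to every label before training lets me compare models trained on clean versus degraded annotations while keeping everything else fixed.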
Are there other aspects of the model architecture that should be considered here?
I welcome all thoughts and would like to gain a better understanding of the YOLO architecture.
Thanks a lot.