Hello!
I am currently working through the OpenOOD v1.5 paper and am wondering how the datasets were classified as Near-OOD vs. Far-OOD vs. covariate-shifted. Were any quantitative metrics or statistical techniques, such as PCA, used to classify them?
For instance, how was NINCO determined to be a Near-OOD dataset in the OpenOOD v1.5 paper? By visual inspection, it seems to have a "greater" semantic shift and a "lesser" covariate shift compared to ImageNet. Were any metrics used to determine the threshold for "greater" or "lesser" in this case?
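To make the question concrete, here is a hypothetical sketch (not anything from the paper) of one way "nearness" could be quantified: embed ID and OOD images with a pretrained classifier and compare average nearest-neighbor distances in feature space. The random arrays below merely stand in for real penultimate-layer features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for penultimate-layer features of a pretrained model.
id_feats = rng.normal(0.0, 1.0, size=(200, 32))   # in-distribution (e.g. ImageNet)
near_ood = rng.normal(0.5, 1.0, size=(100, 32))   # small semantic shift -> "near"
far_ood  = rng.normal(3.0, 1.0, size=(100, 32))   # large semantic shift -> "far"

def mean_nn_distance(ood, id_bank):
    """Average Euclidean distance from each OOD sample to its nearest ID feature."""
    # (n_ood, n_id) pairwise distance matrix via broadcasting
    d = np.linalg.norm(ood[:, None, :] - id_bank[None, :, :], axis=-1)
    return d.min(axis=1).mean()

print(mean_nn_distance(near_ood, id_feats))  # smaller value -> "nearer" dataset
print(mean_nn_distance(far_ood, id_feats))   # larger value  -> "farther" dataset
```

Is something along these lines (or any other statistic over a shared feature space) what was used, or was the Near/Far split made qualitatively?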
Thanks for your time and consideration!