batch size at best model in HICO-DET #67
-
Hi, Fred. Please let me know what hyper-parameters you set in your best model. You said 8 GPUs (TITAN X) with a batch size of 32. Does this mean each GPU got a batch size of 4, or 32? I don't think a TITAN X has enough memory for a batch size of 32. (Edited 10/15) I have one more question, about performance. I tested the pre-trained model checkpoints/scg_1e-4_b32h16e7_hicodet_e2e.pt that you provide. It only reaches (FULL/RARE/NON-RARE) -> (24.88/16.40/25.60), but your paper reports higher numbers (31.33/24.72/33.31). How do I get that performance? I want to reproduce your model's SOTA results.
-
Hi @roy881020,
Thanks for taking an interest in our work.
The batch size for each GPU is in fact 4, which makes the effective batch size 32 with 8 GPUs.
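The arithmetic behind "effective batch size" under data-parallel training can be sketched as below; the variable names are mine for illustration, not from the repository:

```python
# Under data-parallel training, each GPU processes its own mini-batch
# per step, so one optimizer step effectively sees the sum of them.
num_gpus = 8          # TITAN X cards, as described above
per_gpu_batch = 4     # what each GPU actually holds in memory
effective_batch = per_gpu_batch * num_gpus

print(effective_batch)  # 32, the batch size quoted in the paper
```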
The mAP reported in the paper can be achieved using the fine-tuned detections provided from the DRG paper. First make sure you have downloaded the fine-tuned detections following the instructions here. Then make sure you set the flag
--detection-dir hicodet/detections/test2015_finedtuned_drg
when running the test. Let me know if that clears things up.
Cheers,
Fred.