I'd like to point out that, from line 74 to 78 of the answer_extractor.py file, the model is downloaded to a local directory that does not match the standard Hugging Face .cache path. This adds needless structural overhead. Moreover, the files are saved in .bin format, which also requires more disk space. Note that the AutoModelForCausalLM.from_pretrained() method already handles the case where the model is a local path or a Hugging Face model ID, so there is no need for these 4 lines of code. You can safely remove them and let HF handle the download/load of your models.
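For reference, a minimal sketch of the suggested simplification (the model identifier below is a placeholder, not the repository's actual ID):

```python
# from_pretrained() resolves both local paths and Hugging Face Hub IDs,
# and caches downloads under ~/.cache/huggingface by default (typically
# as safetensors when available), so no manual download step is needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "answer-extractor-model"  # placeholder: local dir or Hub ID

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path)
```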
Thank you for your suggestion! The reason we specifically designed the local directory logic was to facilitate the unified management of downloaded model files. However, the issues and suggestions you mentioned are indeed valid, and we will consider further optimizing this part of the logic to enhance its robustness.
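If unified storage of model files is still desired, one option (a sketch only, with an illustrative path) is to keep `from_pretrained()` but point it at a shared cache via its `cache_dir` argument or the `HF_HOME` environment variable, so Hugging Face still handles downloading and caching:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "answer-extractor-model",           # placeholder Hub ID or local path
    cache_dir="/data/models/hf_cache",  # illustrative unified location
)
```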