Summary
Supporting batched inference/representation extraction would enable faster computation for large-scale projects.
Detailed Description
Hello!
I am working on a large-scale molecule project and have been using Uni-Mol representations. I am currently implementing a larger batch size for the representations produced by the tools package, but since you know the architecture better than I do: is this something you are already working on, or plan to implement in the near future?
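For reference, here is roughly how I batch the calls on my side today. This is only a sketch: I am assuming a `UniMolRepr`-style wrapper with a `get_repr` method that takes a list of SMILES and returns per-molecule embeddings under a `cls_repr` key; the exact names may differ in the installed version of unimol_tools.

```python
# Rough sketch of manual batching (names assumed: UniMolRepr, get_repr,
# and the 'cls_repr' key may differ in your unimol_tools version).
from unimol_tools import UniMolRepr

def batched_reprs(smiles_list, batch_size=256):
    """Compute Uni-Mol representations chunk by chunk to bound memory use."""
    model = UniMolRepr(data_type='molecule')
    cls_reprs = []
    # Slice the input into fixed-size chunks and run each one separately.
    for start in range(0, len(smiles_list), batch_size):
        chunk = smiles_list[start:start + batch_size]
        out = model.get_repr(chunk)  # assumed: dict of per-molecule embeddings
        cls_reprs.extend(out['cls_repr'])
    return cls_reprs
```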
Thank you!
Further Information, Files, and Links
No response
Thank you for your suggestion!
Enabling batch inference/representations for large-scale projects is a great idea. In our earlier code the inference batch size was fixed at 32, and it still is today.
We plan to open up this API in the near future to allow more flexibility. Additionally, we will also be adding support for DDP (Distributed Data Parallel) for inference, which should further enhance performance for large-scale tasks.
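In the meantime, the planned change essentially amounts to replacing the hard-coded value with a caller-supplied one when the inference DataLoader is built. Below is a generic PyTorch sketch of that pattern; this is not the actual Uni-Mol internals, and the dataset, model, and device handling are placeholders:

```python
import torch
from torch.utils.data import DataLoader, Dataset

class MoleculeDataset(Dataset):
    """Placeholder dataset; in Uni-Mol this would yield featurized conformers."""
    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]

@torch.no_grad()
def infer_representations(model, samples, batch_size=32, device='cuda'):
    # batch_size is the knob that is currently fixed at 32; the planned API
    # change exposes it to the caller. For multi-GPU (DDP) inference, the
    # standard PyTorch approach is to add a DistributedSampler here.
    loader = DataLoader(MoleculeDataset(samples), batch_size=batch_size)
    model = model.eval().to(device)
    outputs = []
    for batch in loader:
        outputs.append(model(batch.to(device)).cpu())
    return torch.cat(outputs)
```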
Stay tuned for these updates!