Hello!
I have several Mask R-CNN models trained on fairly large datasets of high-resolution images. The models are large, about 170-270 MB each.
I would like to know how to optimize these models and what the best practices are for serving large models in production. I need to run several recognition steps one after another, and the whole pipeline takes about 1 minute. I need to scale my server, but I don't see how to spin up new instances quickly without losing recognition speed. If I store the models outside the server, each new instance has to download about 1 GB of weights before it can start recognizing.
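To make the setup concrete, here is a minimal sketch of the step-by-step recognition I mean (the weight file paths, class count, and torchvision Mask R-CNN loading are placeholders/assumptions, not my exact code):

```python
import torch
import torchvision

# Placeholder paths -- each checkpoint is roughly 170-270 MB, ~1 GB in total.
WEIGHT_FILES = ["model_a.pth", "model_b.pth", "model_c.pth"]
NUM_CLASSES = 5  # placeholder; matches whatever the models were trained with

def load_models(device):
    """Load every Mask R-CNN model once at startup and keep it in memory."""
    models = []
    for path in WEIGHT_FILES:
        model = torchvision.models.detection.maskrcnn_resnet50_fpn(
            weights=None, num_classes=NUM_CLASSES
        )
        model.load_state_dict(torch.load(path, map_location=device))
        model.to(device).eval()
        models.append(model)
    return models

@torch.no_grad()
def recognize(models, image_tensor):
    """Run the recognitions step by step; together they take about a minute."""
    results = []
    for model in models:
        # torchvision detection models take a list of CHW float tensors in [0, 1]
        device = next(model.parameters()).device
        results.append(model([image_tensor.to(device)])[0])
    return results

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    models = load_models(device)
    image = torch.rand(3, 1024, 1024)  # stand-in for a high-resolution image
    outputs = recognize(models, image)
    print([out["scores"].shape for out in outputs])
```

Keeping the models in memory avoids reloading them for every request, but each new instance still has to fetch the ~1 GB of weights at startup, which is exactly the scaling problem I am asking about.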
I would be glad to hear any advice.