question about GPU memory #34
Hello! I'm also following this project, but I can't run the code. Would you be willing to share your email with me so we can discuss it?
@bravelyw I'm sorry, I need more context to figure out your memory usage. Which models are you loading? Does this happen after you load the model, or only once training starts?
OK, share your email and I will send the code to you.
The flow-fwd model, and it happens throughout the training phase.
12232132@mail.sustech.edu.cn. Thank you!
I have sent it to you. And I guess maybe the dataset is not correct.
Also, I want to manually load the last training result after an OOM, but there seems to be no init_model_weights method in exp. How should I do this?
@bravelyw what batch size are you using? How large are your volumes? You're right, it looks like init_model_weights wasn't included in this repo. You should be able to implement your own init_model_weights function that uses standard model loading code from your desired path (e.g. https://www.tensorflow.org/guide/keras/save_and_serialize)
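A minimal sketch of such a helper, assuming the model is a Keras model and that weights were previously saved with `model.save_weights()`; the function name `init_model_weights` and the checkpoint path are placeholders, not part of this repo:

```python
def init_model_weights(model, checkpoint_path):
    """Restore previously saved weights into `model`, e.g. to resume after an OOM crash.

    Assumes `model` is a Keras model and `checkpoint_path` points at weights
    saved with model.save_weights(checkpoint_path). To restore an entire saved
    model instead, tf.keras.models.load_model(path) would be used.
    """
    model.load_weights(checkpoint_path)
    return model
```

You would call this on a freshly built model before resuming training, passing the path of your last saved checkpoint.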
First, thanks for your contribution. In your introduction you said you use a GPU with 12 GB, but when I run the program on a GPU with 24 GB it uses nearly 18 GB. I don't know why; can you help me?
Also, the GPU fan may be broken, but that is not the cause.
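One thing worth noting: by default TensorFlow reserves most of the free GPU memory at startup, so tools like nvidia-smi can report ~18 GB "in use" even when the model itself needs much less. A common sketch to make TensorFlow allocate memory on demand instead (TF 2.x API; this is a general suggestion, not something from this repo):

```python
import tensorflow as tf

# Enable on-demand ("memory growth") allocation for each visible GPU, so
# TensorFlow only claims memory as it is actually needed rather than
# reserving most of the card up front.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```

With this enabled, the memory reported by nvidia-smi reflects actual usage more closely, which makes it easier to compare against the 12 GB figure from the introduction.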