Model not getting trained on single GPU #32
We train the model for 150 epochs; the 38th epoch might still be warm-up. Maybe you can load some pretrained weights to accelerate training?
@aryanmangal769 How can I train the model on one GPU?
@me I add
It is not related to the port. Set --nproc_per_node=1, please.
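For reference, a single-GPU launch with the distributed launcher might look like the following. This is a sketch: the entry-point name (`main.py`) and the extra flags are assumptions, not taken from this repository.

```shell
# One process per GPU: --nproc_per_node=1 for single-GPU training.
# The master port can be any free port; the failure was not port-related.
python -m torch.distributed.launch --nproc_per_node=1 \
    --master_port=29500 main.py --batch_size 2
```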
I set --nproc_per_node=1, but I still get torch.distributed.elastic.multiprocessing.errors.ChildFailedError. How can I resolve this? Thank you for your reply.
Maybe you should update your torch version; torch 1.6 + CUDA 10.1 doesn't support the latest graphics cards.
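The reason an old build fails on a new card: a torch wheel ships GPU kernels only for the compute capabilities its CUDA toolkit knew about, so a newer GPU makes the worker process crash, which the elastic launcher then surfaces as ChildFailedError. A minimal sketch of that version check, using a simplified capability table that is my assumption, not an official list:

```python
# Rough sketch: why a torch build compiled against an old CUDA toolkit
# rejects a newer GPU. The table is a simplified assumption covering a
# few common architectures (capability -> minimum CUDA toolkit).
MIN_TOOLKIT = {
    (7, 0): (9, 0),    # Volta  (V100)      needs CUDA >= 9.0
    (7, 5): (10, 0),   # Turing (RTX 20xx)  needs CUDA >= 10.0
    (8, 0): (11, 0),   # Ampere (A100)      needs CUDA >= 11.0
    (8, 6): (11, 1),   # Ampere (RTX 30xx)  needs CUDA >= 11.1
}

def build_supports_gpu(toolkit: tuple, capability: tuple) -> bool:
    """True if a build compiled against `toolkit` can run on a GPU
    with the given compute `capability`."""
    needed = MIN_TOOLKIT.get(capability)
    return needed is not None and toolkit >= needed

# torch 1.6 ships with CUDA 10.1 at most, so an RTX 30xx is rejected:
print(build_supports_gpu((10, 1), (8, 6)))  # False
print(build_supports_gpu((11, 1), (8, 6)))  # True
```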
Thank you very much for your reply! I can run it now!
I tried a single GPU and dual GPUs with batch sizes of 2 and 4 respectively, and none of them trained effectively. But when I use 4 GPUs with batch=2, the loss decreases normally.
How far does your loss get before it stops decreasing? Thanks!
Before I used 4 GPUs, the loss would float around 32-33 and the accuracy was poor. I then tried the weights posted by the author on a subset and got a loss around 15; you can use that as a reference.
Thanks!
I also have problems when using a single GPU. Is it a problem with the number of GPUs used?
Hello, I tried 2 GPUs with a larger batch size and it didn't work, but I was able to train fine with 4 GPUs.
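The pattern reported above (training fails on 1-2 GPUs but works on 4) is consistent with the effective batch size mattering: under DDP-style training each process consumes its own per-GPU batch, so the total per optimizer step is per-GPU batch × number of GPUs. A minimal sketch; the linear LR-scaling heuristic is a common convention and an assumption here, not something this repository documents:

```python
def effective_batch(per_gpu_batch: int, num_gpus: int) -> int:
    """Total samples per optimizer step when each process (one per GPU)
    consumes its own per-GPU batch, as in DistributedDataParallel."""
    return per_gpu_batch * num_gpus

def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear LR scaling heuristic (an assumption, not this repo's rule)."""
    return base_lr * new_batch / base_batch

# 4 GPUs x batch 2 = 8 samples/step; 1 GPU x batch 2 = only 2.
print(effective_batch(2, 4))  # 8
print(effective_batch(2, 1))  # 2
# To mimic the 4-GPU run on one GPU, raise the batch (or accumulate
# gradients) and keep the LR matched to the effective batch:
print(scaled_lr(1e-4, 8, 2))  # 2.5e-05
```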
Thank you for your reply. If possible, I would like to add you on WeChat to discuss further. My WeChat ID is zhl15042182325
When I try to train on a single GPU, the error keeps increasing and I don't see any good results even by the 38th epoch.
train_class_error starts from 97.88, and from the 19th to the 37th epoch it is consistently 100. Can you help debug this?
Please let me know if you need more information.