[Have you overcome the overfitting problem] #3
Comments
Hi, I did not solve the issue. I tried with around 20k audio clips for 2-person speech separation only. I would assume now that more data than that would be required. I did not experiment much with the model, simply because training one epoch always took 1-2 days, so a couple of epochs would take weeks. This would change depending on your GPU and VRAM availability; I would say more than 16 GB would be helpful. So, there is a lot of opportunity to tweak the model. I also found this. It could be helpful.
Hi, thanks for your quick reply.
Do you mean while preparing the dataset? Well, I've seen someone mention that here. However, adding additional noise, such as AudioSet clips, might help regularise.
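For anyone trying that, here is a minimal sketch of this kind of noise augmentation, assuming the mixture and the noise clip are 1-D NumPy float arrays at the same sample rate; the function name and the SNR handling are illustrative and not part of this repository.

```python
import numpy as np

def mix_with_noise(speech_mix, noise, snr_db=10.0, rng=None):
    """Add a noise clip (e.g. an AudioSet excerpt) to a speech mixture at a target SNR.

    Both inputs are 1-D float arrays at the same sample rate; the noise is
    tiled and cropped to the mixture length before scaling.
    """
    rng = rng or np.random.default_rng()
    if len(noise) < len(speech_mix):
        reps = int(np.ceil(len(speech_mix) / len(noise)))
        noise = np.tile(noise, reps)
    start = rng.integers(0, len(noise) - len(speech_mix) + 1)
    noise = noise[start:start + len(speech_mix)]

    speech_power = np.mean(speech_mix ** 2) + 1e-12
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so speech_power / scaled_noise_power matches the target SNR.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech_mix + scale * noise
```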
Yeah, thanks.
Yes, in certain instances you could make out who the main speaker was in the separated output, but not always. Sometimes it was only noise or a mix of both speakers. For the most part the output was noisy. All of this also applied to the training data, though to a lesser extent. As I said, a lot of time is required for a model/dataset this big.
Probably related to #4
Hi @vitrioil |
Hi @MordehayM, I believe it was 20k unique clips. 200C2 is indeed about 19,900; however, not all combinations are considered. There is a parameter, REMOVE_RANDOM_CHANCE (in audio_mixer_generator.py), which prevents the number of combinations from blowing up; otherwise a lot of files would be created. By default the value is 0.9. Hence, I was not taking all combinations of files.
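For reference, a rough sketch of how a skip probability like REMOVE_RANDOM_CHANCE keeps the pair count manageable; this helper only mirrors the idea described above and is not the actual code in audio_mixer_generator.py.

```python
import itertools
import random

REMOVE_RANDOM_CHANCE = 0.9  # default mentioned above: skip ~90% of candidate pairs

def sample_pairs(files, remove_chance=REMOVE_RANDOM_CHANCE, seed=0):
    """Yield speaker-file pairs, randomly dropping most combinations.

    With 200 files there are C(200, 2) = 19,900 possible pairs; a skip
    probability of 0.9 keeps roughly 1,990 of them on average.
    """
    rng = random.Random(seed)
    for pair in itertools.combinations(files, 2):
        if rng.random() < remove_chance:
            continue  # drop this combination
        yield pair
```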
Hi there,
@vitrioil Just wanted to ask: have you overcome the overfitting problem that you reported in the README?
Thanks. Do you have any idea what caused your overfitting, and any idea how to overcome it?
How much data did you train on? Thanks.