How can I tell when the two training stages have converged? #22

Open
Hryxyhe opened this issue Jun 1, 2024 · 2 comments

Hryxyhe commented Jun 1, 2024

Dear Author:
I really like your contribution and am trying to apply this framework to my task. However, when I train the local and global adapters separately, I cannot tell when each of the two training stages has converged. Since my sub-task requires pixel-level performance, training is time-consuming and the results are poor in both stages. Could you share the final performance of each of the two adapters before joint inference?
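
For reference, a common generic way to judge whether a single training stage has converged is to track a held-out validation loss and treat a sustained plateau as convergence. The sketch below only illustrates that idea and is not taken from this repository; the class name and the `patience` / `min_delta` values are arbitrary assumptions.

```python
# Illustrative early-stopping-style convergence check that could be run once per
# validation pass during either the local- or global-adapter training stage.
# All names and thresholds here are assumptions, not settings from this project.

class ConvergenceMonitor:
    """Flags convergence when the validation loss stops improving for `patience` checks."""

    def __init__(self, patience: int = 3, min_delta: float = 1e-4):
        self.patience = patience      # number of stagnant evaluations to tolerate
        self.min_delta = min_delta    # minimum improvement that still counts as progress
        self.best = float("inf")
        self.stagnant = 0

    def update(self, val_loss: float) -> bool:
        """Record one validation loss; return True once the loss has plateaued."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.stagnant = 0
        else:
            self.stagnant += 1
        return self.stagnant >= self.patience


# Usage: feed it the validation loss after each evaluation of the current stage.
monitor = ConvergenceMonitor(patience=3, min_delta=1e-4)
for epoch, val_loss in enumerate([0.90, 0.72, 0.65, 0.64, 0.641, 0.640, 0.642]):
    if monitor.update(val_loss):
        print(f"Stage appears converged at epoch {epoch}")
        break
```

Running a check like this separately for each adapter stage would at least give a concrete stopping point before attempting joint inference.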

hhh388 commented Aug 23, 2024

Hello, did you solve this problem? I would also like to know what the final performance of each of the two adapters looks like; my training loss is not decreasing.

Hryxyhe commented Aug 23, 2024

Not yet. I have moved back to the original ControlNet. Maybe you could try some other new open-source works.
