
Shouldn't fix_noise_scheduler_betas_for_zero_terminal_snr be applied BEFORE prepare_scheduler_for_custom_training? #1905

Open
67372a opened this issue Jan 27, 2025 · 2 comments
Labels
help wanted Extra attention is needed

Comments

67372a commented Jan 27, 2025

Hello,

Reference code:

def get_noise_scheduler(self, args: argparse.Namespace, device: torch.device) -> Any:

I noticed that fix_noise_scheduler_betas_for_zero_terminal_snr modifies the noise scheduler's alphas_cumprod, while prepare_scheduler_for_custom_training uses alphas_cumprod to compute all_snr. With the current call order, all_snr reflects the non-ZTSNR SNR values rather than the ZTSNR ones, so anything that consumes all_snr operates on SNRs that do not match the ZTSNR schedule.

As such, shouldn't fix_noise_scheduler_betas_for_zero_terminal_snr always be applied before prepare_scheduler_for_custom_training?

Thank you.
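To make the ordering concrete, here is a minimal NumPy sketch (not the project's actual torch code). The ZTSNR rescaling follows the method from "Common Diffusion Noise Schedules and Sample Steps Are Flawed"; compute_all_snr stands in for the SNR cache that prepare_scheduler_for_custom_training builds, and its formula alphas_cumprod / (1 - alphas_cumprod) is my assumption about that cache:

```python
import numpy as np

def fix_betas_for_zero_terminal_snr(betas):
    # Rescale betas so that alphas_cumprod[-1] == 0, i.e. SNR is exactly
    # zero at the final timestep (the ZTSNR fix).
    alphas = 1.0 - betas
    alphas_bar = np.cumprod(alphas)
    abar_sqrt = np.sqrt(alphas_bar)
    a0, aT = abar_sqrt[0], abar_sqrt[-1]
    # Shift so the last value becomes 0, then rescale so the first is unchanged.
    abar_sqrt = (abar_sqrt - aT) * a0 / (a0 - aT)
    alphas_bar = abar_sqrt ** 2
    alphas = np.concatenate([alphas_bar[:1], alphas_bar[1:] / alphas_bar[:-1]])
    return 1.0 - alphas

def compute_all_snr(betas):
    # Stand-in for the all_snr cache: per-timestep SNR from alphas_cumprod.
    alphas_bar = np.cumprod(1.0 - betas)
    return alphas_bar / (1.0 - alphas_bar)

# SD-style scaled-linear schedule, 1000 timesteps (illustrative values).
betas = np.linspace(0.00085 ** 0.5, 0.012 ** 0.5, 1000) ** 2

snr_cached_first = compute_all_snr(betas)                                   # current order
snr_fixed_first = compute_all_snr(fix_betas_for_zero_terminal_snr(betas))   # proposed order
```

The two caches agree closely at early timesteps but diverge as t approaches 999: with the fix applied first, the terminal SNR is exactly zero, while the cache built before the fix keeps a small positive terminal SNR.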

@67372a 67372a changed the title Shouldn't fix_noise_scheduler_betas_for_zero_terminal_snr be applyed BEFORE prepare_scheduler_for_custom_training? Shouldn't fix_noise_scheduler_betas_for_zero_terminal_snr be applied BEFORE prepare_scheduler_for_custom_training? Jan 27, 2025
67372a commented Jan 27, 2025

sd_snr.ods

Here is a sheet with both SNR series dumped from the cache; you can see they start to diverge significantly as they approach timestep 999.

@kohya-ss kohya-ss added the help wanted Extra attention is needed label Jan 27, 2025
67372a commented Jan 30, 2025

To clarify impact, all_snr is used by the following standard features that I know of:

  • min snr gamma
  • debiased loss
  • scale_v_prediction_loss_like_noise_prediction
  • v_prediction_like_loss

In addition, for those using something like my personal fork:

  • edm2 loss weighting
  • sangoi loss
  • laplace timestep sampling
  • Any snr based scheduling that uses the all_snr cache
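To illustrate how one of the listed consumers depends on the cache, here is a hedged sketch of min-SNR-gamma weighting (following Hang et al.); the function name and the epsilon guard are my own, and the project's torch implementation differs in details:

```python
import numpy as np

def min_snr_gamma_weights(all_snr, gamma=5.0, v_prediction=False):
    # eps-prediction: min(snr, gamma) / snr
    # v-prediction:   min(snr, gamma) / (snr + 1)
    clipped = np.minimum(all_snr, gamma)
    if v_prediction:
        return clipped / (all_snr + 1.0)
    # Guard against the snr == 0 terminal step a ZTSNR schedule produces.
    return clipped / np.maximum(all_snr, 1e-12)

# Illustrative SNR values, including a ZTSNR-style zero terminal step.
snr = np.array([1000.0, 5.0, 0.1, 0.0])
w = min_snr_gamma_weights(snr)
```

Because the weights are computed directly from the cached all_snr, a cache built before the ZTSNR fix yields incorrect weights at the high timesteps where the two SNR series diverge.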
