
A question about "l1 loss" between pred outcome and all one tensor in trainer #1

Open
YoungQuasimodo opened this issue Sep 25, 2024 · 0 comments

Comments


YoungQuasimodo commented Sep 25, 2024

Regarding the code in shading_controlnet_trainer.py, line 247:
`reg_loss = torch.nn.functional.l1_loss(pred_mult_layer, torch.ones_like(pred_mult_layer))`
`reg_loss += torch.nn.functional.l1_loss(pred_div_layer, torch.ones_like(pred_div_layer))`
Why is the L1 loss computed between the predicted output and an all-ones tensor? Is it intended to minimize over-exposure artifacts?
Also, it would be greatly appreciated if you could make the dataset and the inference code available.
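
For reference, here is a minimal runnable sketch of the regularizer quoted above, not the authors' actual training code. The tensor shapes and the "identity prior" reading (pulling multiplicative/divisive layers toward 1 so they leave the image unchanged) are my own assumptions:

```python
import torch
import torch.nn.functional as F

# Hypothetical predicted shading layers; in the real trainer these come
# from the network, here they are random tensors just for illustration.
pred_mult_layer = torch.rand(1, 3, 64, 64, requires_grad=True)
pred_div_layer = torch.rand(1, 3, 64, 64, requires_grad=True)

# L1 distance to an all-ones tensor: a value of exactly 1 leaves the image
# unchanged when the layer is applied multiplicatively (x * 1 == x / 1 == x),
# so this term pulls both layers toward a "no shading change" identity
# (my interpretation, not confirmed by the authors).
reg_loss = F.l1_loss(pred_mult_layer, torch.ones_like(pred_mult_layer))
reg_loss += F.l1_loss(pred_div_layer, torch.ones_like(pred_div_layer))

reg_loss.backward()  # gradients push the predictions toward 1.0
print(reg_loss.item())
```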
