
log regr with brier loss #129

Open
berndbischl opened this issue Jan 13, 2023 · 2 comments

@berndbischl
Contributor

We should mention that this can also be naively considered.

We have a nice exercise in optimml which shows that the resulting problem is not necessarily convex.

In the old days this was also occasionally discussed as an option for ANNs, and its downsides were discussed there.
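A minimal sketch (not from the issue) illustrating the non-convexity claim: assuming a single observation with x = 1, y = 0, the Brier (squared) loss applied to the sigmoid output, pi(theta) = sigmoid(theta * x), has curvature that changes sign in theta, so the problem is not convex.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def brier_loss(theta, x=1.0, y=0.0):
    # squared (Brier) loss on the predicted probability pi(x) = sigmoid(theta * x)
    return (sigmoid(theta * x) - y) ** 2

# Second finite differences along theta: convexity would require all >= 0.
h = 1e-3
curvatures = []
for theta in [t / 10 for t in range(-30, 31)]:
    d2 = (brier_loss(theta + h) - 2 * brier_loss(theta) + brier_loss(theta - h)) / h**2
    curvatures.append((theta, d2))

# The curvature is positive near theta = 0 but turns negative beyond
# theta = ln 2 (where sigmoid(theta) = 2/3), so the loss is non-convex.
print(min(d2 for _, d2 in curvatures))  # negative
print(max(d2 for _, d2 in curvatures))  # positive
```

Analytically, for this example the second derivative is 2 * sigma^2 * (1 - sigma) * (2 - 3 * sigma) with sigma = sigmoid(theta), which flips sign at sigma = 2/3.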

@ludwigbothmann
Contributor

Add an example on the minimization of these losses to LT1.
Should we also add squared loss for classification?
Squared loss on f, or on pi(x)?
Are both sensible?
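A hedged sketch (my own illustration, not from the lecture material) of the difference the question is asking about: with a linear score f(x) = theta * x, the squared loss on f is convex in theta, while the squared loss on pi(x) = sigmoid(f(x)) is not. The setup below assumes a single observation with x = 1, y = 0 and checks curvature numerically.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss_on_f(theta, x=1.0, y=0.0):
    # squared loss applied directly to the score f(x) = theta * x
    return (theta * x - y) ** 2

def loss_on_pi(theta, x=1.0, y=0.0):
    # squared loss applied to the probability pi(x) = sigmoid(theta * x)
    return (sigmoid(theta * x) - y) ** 2

h = 1e-3
def curvature(loss, theta):
    # second finite difference; convexity requires this to be >= 0 everywhere
    return (loss(theta + h) - 2 * loss(theta) + loss(theta - h)) / h**2

thetas = [t / 10 for t in range(-30, 31)]
print(min(curvature(loss_on_f, t) for t in thetas))   # positive: convex in theta
print(min(curvature(loss_on_pi, t) for t in thetas))  # negative: non-convex
```

So both versions can be discussed, but they are not interchangeable: the loss on f gives a convex (least-squares) problem, while the loss on pi(x) does not.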

@tpielok tpielok transferred this issue from slds-lmu/lecture_i2ml May 10, 2023