We should say that the squared loss can also be naively considered for classification.
We have a nice exercise in optimml which shows that this is not necessarily convex (see the sketch below).
This was also sometimes discussed in the old days as an option for ANNs, and the downsides were discussed there.
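A minimal numerical sketch of that non-convexity, assuming the squared loss is applied to pi(x) = sigmoid(f) (this is my own illustration, not the optimml exercise itself; all names here are placeholders):

```python
import numpy as np

def sigmoid(f):
    return 1.0 / (1.0 + np.exp(-f))

def squared_loss_on_prob(f, y):
    # squared loss applied to the predicted probability pi(x) = sigmoid(f)
    return (y - sigmoid(f)) ** 2

# A convex function of f has non-negative second differences everywhere,
# so a single negative value is enough to show non-convexity.
f = np.linspace(-6.0, 6.0, 1001)
loss = squared_loss_on_prob(f, y=1)
second_diff = np.diff(loss, n=2)  # discrete analogue of the second derivative

print("min second difference:", second_diff.min())  # negative => not convex in f
```

Analytically, the second derivative of (1 - sigmoid(f))^2 in f is negative for f < -ln 2, so the loss surface has a concave region on the misclassified side.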
Include an example on minimizing the losses in lt1. Should we also add the squared loss for classification? Squared loss on f, or on pi(x)? Are both sensible?
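For reference, the two variants the question above distinguishes might look like this (a hedged sketch; y ∈ {0, 1} and the sigmoid link are my assumptions, the notation in lt1 may differ):

$$
L\big(y, f(x)\big) = \big(y - f(x)\big)^2
\qquad \text{vs.} \qquad
L\big(y, \pi(x)\big) = \big(y - \sigma(f(x))\big)^2,
\quad \sigma(t) = \frac{1}{1 + e^{-t}}.
$$

The version on f is convex in f but keeps penalizing scores that overshoot the label; the version on pi(x) is bounded but not convex in f, as the sketch above and the article linked below illustrate.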
https://towardsdatascience.com/why-using-mean-squared-error-mse-cost-function-for-binary-classification-is-a-bad-idea-933089e90df7