[solidago] gbt: estimate asymmetrical uncertainties based on increase of loss by 1 #1973
Conversation
This is intended. One interesting implication of this is that if a user says A is maximally better than B, then the comparison will yield an infinite right uncertainty on A, and an infinite uncertainty on B. Does this break something? If the uncertainty is too large (perhaps a feature rather than a bug in principle), the value +1 in the equation may be changed to a smaller value.
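For reference, here is a minimal sketch of the idea being discussed, with a hypothetical one-dimensional negative log-likelihood `nll` and optimum `theta_star` (these names and the use of scipy are illustrative only; the actual implementation relies on its own `solve` helper shown further down):

```python
import numpy as np
from scipy.optimize import brentq

def asymmetric_uncertainties(nll, theta_star, loss_increase=1.0, search_width=1e6):
    """Left/right deltas at which the loss exceeds its minimum by `loss_increase`.

    Returns np.inf on a side where the threshold is never reached,
    e.g. after a "maximally better" comparison as described above.
    """
    nll_min = nll(theta_star)

    def excess(delta):
        return nll(theta_star + delta) - nll_min - loss_increase

    # excess(0) == -loss_increase < 0, so a sign change over the search
    # interval guarantees a root for brentq; otherwise the uncertainty
    # on that side is reported as infinite.
    right = brentq(excess, 0.0, search_width) if excess(search_width) > 0 else np.inf
    left = -brentq(excess, -search_width, 0.0) if excess(-search_width) > 0 else np.inf
    return left, right
```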
Ok 👍 I pushed the modification in bf95cbb. It seems to work. I was just a bit surprised to see such a big difference compared to the current expected values for uncertainties in the test files. For example in "data_3.py":
@lenhoanglnh The uncertainty values close to 700 were actually due to a numerical issue. In practice, there are cases where the log-likelihood term never reaches the threshold.
```diff
@@ -29,7 +30,7 @@ def solve(
     -------
     out: float
     """
-    ymin, ymax = f(xmin) - value, f(xmax) - value
+    ymin, ymax = f(xmin, *args) - value, f(xmax, *args) - value
```
minor: Another way to do something similar would be not to change solve, but to use it with a partial:
https://docs.python.org/3/library/functools.html#functools.partial
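For illustration, the partial-based variant might look like the sketch below. The call shape of solve and the argument values are assumptions, and this only works if solve is called from plain Python (a numba-jitted solve could not accept a partial object):

```python
from functools import partial

# Bind the extra arguments of f once, so that solve() can keep its
# original one-argument interface and only vary delta (illustrative only).
bound_f = partial(
    f,
    theta_diff=theta_diff,
    r=r,
    coord_indicator=coord_indicator,
    ll_actual=ll_actual,
)
delta = solve(bound_f, xmin, xmax, value)
```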
```python
@njit
def f(delta, theta_diff, r, coord_indicator, ll_actual):
    return ll_function(theta_diff + delta * coord_indicator, r) - ll_actual - 1.0
```
minor: Should this -1.0 be a constant: HIGH_LIKELIHOOD_RANGE_THRESHOLD = 1.0?
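Applied to the snippet above, the suggestion would read roughly as follows (the constant name is the one proposed in the comment; numba treats module-level globals as compile-time constants, so the @njit function can reference it):

```python
# Loss increase that delimits the high-likelihood range used for uncertainties.
HIGH_LIKELIHOOD_RANGE_THRESHOLD = 1.0

@njit
def f(delta, theta_diff, r, coord_indicator, ll_actual):
    return (
        ll_function(theta_diff + delta * coord_indicator, r)
        - ll_actual
        - HIGH_LIKELIHOOD_RANGE_THRESHOLD
    )
```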
```python
pass

@cached_property
def loss_increase_to_solve(self):
```
minor: Naming: translated_negative_log_likelihood (what it is, not what it is meant to be used for); also, it's a log-likelihood, not a loss.
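A minimal sketch of what the rename could look like (the body is elided; the docstring just restates the point of the suggestion):

```python
@cached_property
def translated_negative_log_likelihood(self):
    """Negative log-likelihood translated so that its zero crossings mark
    the points where it has increased by the chosen threshold (body elided
    in this sketch)."""
    ...
```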
solidago/src/solidago/preference_learning/generalized_bradley_terry.py
…terry.py Co-authored-by: Louis Faucon <lpfaucon@gmail.com>
* Added import for vouchers and scores in pipline/inputs. Fixed tiny_tournesol.zip file for testing. Added data_analysis for dataset submission.
* Important change: Modified qr_quantile using asymmetric Huber rather than additional term. This implies that the addition of a new user with huge uncertainties will not affect the quantile much.
* implement 'get_pipeline_kwargs' in TournesolInput
* fix experiments script
* read vouches in TournesolInput
* [solidago] gbt: estimate asymmetrical uncertainties based on increase of neg. log likelihood by 1 (#1973)
  Co-authored-by: Louis Faucon <lpfaucon@gmail.com>
* implement 'get_pipeline_kwargs' in TournesolInput
* fix experiments script
* read vouches in TournesolInput
* Fixed experiments calls to Tournesol inputs API
* normalize weight per user in Standardize
* normalize weight per user in QuantileZeroShift
* solidago: fix numerical instability in gbt
* fix wrong usage of 'med' in qr_uncertainty, expose high_likelihood_range_threshold in gbt args
* add QuantileShift (in addition to QuantileZeroShift) to define target_score different from 0
* lbfgs: raise error when max_iter is reached
* update ml_train to call new pipeline, tweaks in solidago to be consistent with existing tournesol tests
* fix test_mehestan in solidago, standardize typing to reduce numba compilations
* fix mehestan after refactoring
* update test about scalings
* fix lbfgs initialization when past scores are available

Co-authored-by: Adrien Matissart <a@matissart.net>
Co-authored-by: Adrien Matissart <amatissart@users.noreply.github.com>
Co-authored-by: Louis Faucon <lpfaucon@gmail.com>
Based on #1970
I struggled with the sign conventions, but I think I got something that works as expected.
TODO:
review the definition
@lenhoanglnh the paper suggests considering only the negative log-likelihood term to estimate the uncertainties. Don't we need to consider the regularization term too? That is what is done on this branch, because I observed very high values when it was not present (see the sketch after this list).
the test data need to be updated with new uncertainties, after some sanity checks on the actual values
adapt the L-BFGS implementation to use the new uncertainties too (or split the tests)
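For concreteness, the two definitions contrasted in the question above can be written as follows (the notation is illustrative: L is the negative log-likelihood, R the regularization term, theta* the learned scores and e_i the unit vector of coordinate i):

```latex
% Threshold on the log-likelihood term only (as suggested in the paper):
\mathcal{L}(\theta^* + \delta_i e_i) - \mathcal{L}(\theta^*) = 1

% Threshold on the full regularized loss (as done on this branch):
(\mathcal{L} + \mathcal{R})(\theta^* + \delta_i e_i) - (\mathcal{L} + \mathcal{R})(\theta^*) = 1
```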
Checklist