
TemperatureScaling and LogisticCalibration do not work correctly #61

salokin1997 opened this issue Sep 2, 2024 · 2 comments

@salokin1997

Hello, thank you for this great toolbox. However, I have the same problem as described here (#48 (comment)). It seems to depend on the method argument of the calibration mapping: with method='mle', the ECE (and the other metrics) stays equal to the uncalibrated ECE. This applies to both TemperatureScaling and LogisticCalibration. If I change the method to 'mcmc', the problem no longer occurs. I am currently using netcal version 1.3.6. Below is example code based on your examples and the README:

import numpy as np
from netcal.metrics import ECE
from netcal.scaling import TemperatureScaling, LogisticCalibration
from sklearn.model_selection import train_test_split

# load data
data = np.load("records/cifar100/wideresnet-16-4-cifar-100.npz")
predictions = data['predictions']
ground_truth = data['ground_truth']

# split data set into build set and validation set
pred_train, pred_val, lbl_train, lbl_val = train_test_split(predictions, ground_truth,
                                                            test_size=0.7,
                                                            stratify=ground_truth,
                                                            random_state=None)

# apply TS
temperature = TemperatureScaling(detection=False, use_cuda=True, method='mle')
temperature.fit(pred_train, lbl_train)
calibrated_ts = temperature.transform(pred_val)

# apply LR
lr = LogisticCalibration(detection=False, use_cuda=True, method='mle')
lr.fit(pred_train, lbl_train)
calibrated_lr = lr.transform(pred_val)

# Evaluate
n_bins = 10
ece = ECE(n_bins)
uncalibrated_score = ece.measure(pred_val, lbl_val)
calibrated_score_ts = ece.measure(calibrated_ts, lbl_val)
calibrated_score_lr = ece.measure(calibrated_lr, lbl_val)

print(f'uncalibrated ECE: {uncalibrated_score}')
print(f'calibrated ECE with TS: {calibrated_score_ts}')
print(f'calibrated ECE with LR: {calibrated_score_lr}')

The output:

uncalibrated ECE: 0.05723183405505762
calibrated ECE with TS: 0.05723081579165799
calibrated ECE with LR: 0.05723081579165799

The output when using method='mcmc' instead of 'mle':

Sample: 100%|██████████| 350/350 [00:03, 96.51it/s, step size=7.82e-01, acc. prob=0.944] 
Sample: 100%|██████████| 350/350 [04:27,  1.31it/s, step size=3.79e-02, acc. prob=0.985]
uncalibrated ECE: 0.058336610062313915
calibrated ECE with TS: 0.039346123378723855
calibrated ECE with LR: 0.03517598363437825
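
For now, the workaround is the one-line change mentioned above: switch the method argument from 'mle' to 'mcmc' and keep the rest of the script identical, e.g.

# workaround: use the MCMC backend instead of MLE (rest of the script unchanged)
temperature = TemperatureScaling(detection=False, use_cuda=True, method='mcmc')
lr = LogisticCalibration(detection=False, use_cuda=True, method='mcmc')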
@fabiankueppers
Collaborator

Hi, could you also verify this behavior on other datasets, or only on the example data? Did you already figure out a solution? Unfortunately, I don't have the time to go into more detail yet. Please feel free to submit a possible bugfix if you have already found a solution.

@dholzmueller

Just ran into the same issue and noticed that scipy.optimize.minimize reports a zero gradient of the objective after two function evaluations and stops. However, I don't want to dig through all the pyro code in the objective to figure out where the problem comes from.
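
For anyone debugging this, here is a minimal sketch of that failure mode outside of netcal, assuming only what is described above (the 'mle' path hands an objective plus an explicit jacobian to scipy.optimize.minimize). If the jacobian evaluates to zero, minimize declares convergence after very few evaluations and returns the initial parameters, which would explain why the "calibrated" ECE equals the uncalibrated one:

import numpy as np
from scipy.optimize import minimize

def objective(theta):
    # well-behaved toy loss; its true gradient at theta = 1 is clearly non-zero
    return float(np.sum((theta - 3.0) ** 2))

def broken_jacobian(theta):
    # stand-in for a gradient that is (wrongly) reported as zero;
    # the actual cause inside the pyro objective is not known here
    return np.zeros_like(theta)

result = minimize(objective, x0=np.ones(1), jac=broken_jacobian, method="L-BFGS-B")
print(result.x, result.nfev, result.message)
# x stays at the initial value after one or two evaluations, mirroring the
# "calibrated ECE == uncalibrated ECE" behavior reported above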
