Hello, thank you for this great toolbox. However, I have the same problem as described here (#48 (comment)). It comes down to the method used for the calibration mapping: with method='mle', the ECE (and the other metrics) is equal to the uncalibrated ECE. This applies to both TemperatureScaling and LogisticCalibration. If you change the method to 'mcmc', the problem no longer occurs. I am currently using version 1.3.6 of netcal. Below is an example based on your example code and the README:
import numpy as np
from netcal.metrics import ECE
from netcal.scaling import TemperatureScaling, LogisticCalibration
from sklearn.model_selection import train_test_split
# load data (CIFAR-100 WideResNet records from the repository);
# renamed from "input" to avoid shadowing the Python builtin
data = np.load("records/cifar100/wideresnet-16-4-cifar-100.npz")
predictions = data['predictions']
ground_truth = data['ground_truth']
# split data set into build set and validation set
pred_train, pred_val, lbl_train, lbl_val = train_test_split(predictions, ground_truth,
                                                            test_size=0.7,
                                                            stratify=ground_truth,
                                                            random_state=None)
# apply TS
temperature = TemperatureScaling(detection=False, use_cuda=True, method='mle')
temperature.fit(pred_train, lbl_train)
calibrated_ts = temperature.transform(pred_val)
# apply LR
lr = LogisticCalibration(detection=False, use_cuda=True, method='mle')
lr.fit(pred_train, lbl_train)
calibrated_lr = lr.transform(pred_val)
# Evaluate
n_bins = 10
ece = ECE(n_bins)
uncalibrated_score = ece.measure(pred_val, lbl_val)
calibrated_score_ts = ece.measure(calibrated_ts, lbl_val)
calibrated_score_lr = ece.measure(calibrated_lr, lbl_val)
print(f'uncalibrated ECE: {uncalibrated_score}')
print(f'calibrated ECE with TS: {calibrated_score_ts}')
print(f'calibrated ECE with LR: {calibrated_score_lr}')
The output:
uncalibrated ECE: 0.05723183405505762
calibrated ECE with TS: 0.05723081579165799
calibrated ECE with LR: 0.05723081579165799
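For comparison, swapping the method to mcmc makes the problem disappear. A minimal sketch of the swap, reusing the data and the ece object from above (note that MCMC sampling is much slower than the MLE fit, and depending on the netcal version transform may average over the sampled parameters):
# same pipeline as above, only the method changes
temperature_mcmc = TemperatureScaling(detection=False, use_cuda=True, method='mcmc')
temperature_mcmc.fit(pred_train, lbl_train)
calibrated_ts_mcmc = temperature_mcmc.transform(pred_val)
print(f'calibrated ECE with TS (mcmc): {ece.measure(calibrated_ts_mcmc, lbl_val)}')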
Hi, could you also verify this behavior on different datasets, or only on the example data? Have you already figured out a solution? Unfortunately, I don't have the time yet to go into more detail. Please feel free to add a possible bugfix if you have already found a solution.
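If it helps to reproduce without the CIFAR records, a hypothetical sketch on synthetic data (the shapes and the overconfidence model below are made up for illustration): sharpen softmax scores with a temperature below 1 so they are overconfident, then check whether fitting with method='mle' changes the ECE at all:
import numpy as np
from netcal.metrics import ECE
from netcal.scaling import TemperatureScaling

rng = np.random.default_rng(0)
n_samples, n_classes = 5000, 10
logits = rng.normal(size=(n_samples, n_classes))
# labels correlated with the logits so that the scores carry signal
labels = np.argmax(logits + rng.normal(scale=2.0, size=logits.shape), axis=1)

# make the scores overconfident by sharpening with a temperature < 1
overconfident = np.exp(logits / 0.5)
overconfident /= overconfident.sum(axis=1, keepdims=True)

ts = TemperatureScaling(method='mle')
ts.fit(overconfident, labels)
calibrated = ts.transform(overconfident)

ece = ECE(10)
print('uncalibrated ECE:', ece.measure(overconfident, labels))
print('mle-calibrated ECE:', ece.measure(calibrated, labels))
If the bug is present, both numbers come out identical.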
Just ran into the same issue and noticed that the problem is that scipy.optimize.minimize reports a zero gradient of the objective after two function evaluations and stops. However, I don't want to dig through all the pyro internals in the objective to figure out what the problem is.
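For reference, a standalone sketch (a toy objective, not netcal's actual one) of how a zero gradient at the starting point makes scipy.optimize.minimize bail out immediately and return the initial parameters unchanged, which would leave the temperature at its initial value and the ECE untouched:
import numpy as np
from scipy.optimize import minimize

# toy objective that is flat around x0: the finite-difference gradient
# is zero, so the optimizer declares convergence after only a couple of
# function evaluations and returns x0 unchanged
def objective(x):
    return 1.0

result = minimize(objective, x0=np.array([1.0]), method='L-BFGS-B')
print(result.message)  # convergence: projected gradient below tolerance
print(result.nfev)     # only a handful of function evaluations
print(result.x)        # identical to x0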