Official Repository for ICLR'23 Paper "Error Sensitivity Modulation based Experience Replay: Mitigating Abrupt Representation Drift in Continual Learning"
We extended the CLS-ER repo with our method.
- Use `python main.py` to run experiments.
- Use the argument `--load_best_args` to use the best hyperparameters for each evaluation setting from the paper.
- To reproduce the results in the paper, run:

```
python main.py --dataset <dataset> --model <model> --buffer_size <buffer_size> --load_best_args
```

For example:

```
python main.py --dataset seq-cifar10 --model esmer --buffer_size 200 --load_best_args
python main.py --dataset seq-cifar100 --model esmer --buffer_size 200 --load_best_args
python main.py --dataset gcil-cifar100 --weight_dist unif --model esmer --buffer_size 200 --load_best_args
python main.py --dataset gcil-cifar100 --weight_dist longtail --model esmer --buffer_size 200 --load_best_args
```
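Since all four reproduction runs share the same model, buffer size, and `--load_best_args` flag, they can be generated from a single loop. The snippet below is a hypothetical convenience script (not part of the repo) that only prints the commands; replace `printf` with `eval` or run the lines manually to launch the experiments.

```shell
# Hypothetical helper (not part of the repo): build the four reproduction
# commands from the paper, each with buffer size 200 and the best args.
cmds=""
for args in \
  "--dataset seq-cifar10" \
  "--dataset seq-cifar100" \
  "--dataset gcil-cifar100 --weight_dist unif" \
  "--dataset gcil-cifar100 --weight_dist longtail"
do
  cmds="${cmds}python main.py $args --model esmer --buffer_size 200 --load_best_args\n"
done
# Print one command per line.
printf "%b" "$cmds"
```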
Requirements:

- torch==1.7.0
- torchvision==0.9.0
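Assuming a standard pip setup, the pinned dependencies above can be installed as follows (the repo may require additional packages, and wheel availability for these versions depends on your Python and CUDA setup):

```shell
# Install the pinned versions listed above; adjust if your platform
# does not provide wheels for these releases.
pip install torch==1.7.0 torchvision==0.9.0
```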
If you find the code useful in your research, please consider citing our paper:
```
@inproceedings{
sarfraz2023error,
title={Error Sensitivity Modulation based Experience Replay: Mitigating Abrupt Representation Drift in Continual Learning},
author={Sarfraz, Fahad and Arani, Elahe and Zonooz, Bahram},
booktitle={The Eleventh International Conference on Learning Representations},
year={2023},
url={https://openreview.net/forum?id=zlbci7019Z3}
}
```