This is the code for our ACM CCS 2021 paper Membership Leakage in Label-Only Exposures. We propose the first label-only membership inference attack, which relies solely on the final prediction of the target model, i.e., the predicted label, as its attack model's input.
Users should first install Python 3.8 and PyTorch. We recommend using conda and following the official installation documentation.
Specifically, please install the following versions:
pytorch==2.2.0
numpy<1.24.0
foolbox==3.0.0
adversarial-robustness-toolbox==1.5.0
scipy==1.7.0
runx==0.0.11
For all other dependencies, please install the latest versions.
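A typical conda-based setup under the pins above might look like the following (the environment name is illustrative; "pytorch" is the conda package name, while pip calls the same package "torch"):

```shell
# Create and activate a fresh Python 3.8 environment (name is illustrative)
conda create -n label-only-mia python=3.8 -y
conda activate label-only-mia

# Install the pinned dependencies listed above
pip install pytorch==2.2.0 "numpy<1.24.0" foolbox==3.0.0 \
    adversarial-robustness-toolbox==1.5.0 scipy==1.7.0 runx==0.0.11
```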
Check argparse.ArgumentParser for the available options (dataset, model, etc.), and set args.action to run the following attacks:
action=1: Train_Target_Model(args)
action=2: Train_Shadow_Model(args)
action=3: AdversaryOne(args)
action=4: AdversaryTwo(args)
action=5: Decision_Radius(args)
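A minimal sketch of how the action flag can dispatch to these entry points (the flag names, defaults, and stub bodies below are illustrative assumptions; check the repository's actual ArgumentParser for the real options):

```python
import argparse

# Stubs standing in for the repository's real entry points.
def Train_Target_Model(args): return "target"
def Train_Shadow_Model(args): return "shadow"
def AdversaryOne(args): return "adversary_one"
def AdversaryTwo(args): return "adversary_two"
def Decision_Radius(args): return "decision_radius"

# Map each action value to its entry point, mirroring the list above.
ACTIONS = {
    1: Train_Target_Model,
    2: Train_Shadow_Model,
    3: AdversaryOne,
    4: AdversaryTwo,
    5: Decision_Radius,
}

def main(argv=None):
    parser = argparse.ArgumentParser()
    # Flag names and defaults here are assumptions for illustration.
    parser.add_argument("--action", type=int, choices=sorted(ACTIONS), required=True)
    parser.add_argument("--dataset", default="CIFAR10")
    parser.add_argument("--model", default="resnet18")
    args = parser.parse_args(argv)
    return ACTIONS[args.action](args)
```

For example, `main(["--action", "3"])` would dispatch to AdversaryOne with the parsed arguments.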
@inproceedings{LZ21,
author = {Zheng Li and Yang Zhang},
title = {{Membership Leakage in Label-Only Exposures}},
booktitle = {{ACM SIGSAC Conference on Computer and Communications Security (CCS)}},
publisher = {ACM},
year = {2021}
}
Label-Only MIA is freely available for non-commercial use and may be redistributed under these conditions. For commercial inquiries, please send an e-mail to zhang[AT]cispa.de, and we will send you the detailed agreement.