Label-Only Membership Inference Attack


This is the code for our ACM CCS 2021 paper Membership Leakage in Label-Only Exposures. We propose the first label-only membership inference attack, which relies solely on the target model's final prediction, i.e., the predicted label, as the input to the attack.
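
For intuition, in the label-only setting the adversary's view of the target model is reduced to hard labels. A minimal sketch of such a query interface (the wrapper below is illustrative and not part of the repository):

import torch

@torch.no_grad()
def label_only_query(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Return only the predicted class indices, hiding all confidence scores."""
    model.eval()
    return model(x).argmax(dim=1)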

Prepare

Users should first install Python 3.8 and PyTorch. We recommend installing them with conda, following the official documentation.

Specifically, please install the following pinned versions:

pytorch==2.2.0
numpy<1.24.0
foolbox==3.0.0
adversarial-robustness-toolbox==1.5.0
scipy==1.7.0
runx==0.0.11

For all other packages, please install the latest versions.
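
As a quick sanity check before running anything, the pinned packages can be imported and their versions printed against the list above (a small helper, not part of the repository; note that adversarial-robustness-toolbox is imported as art):

import torch
import numpy
import scipy
import foolbox
import art  # adversarial-robustness-toolbox

# Print the installed versions to compare against the pinned requirements above.
for name, module in [("pytorch", torch), ("numpy", numpy), ("scipy", scipy),
                     ("foolbox", foolbox), ("adversarial-robustness-toolbox", art)]:
    print(f"{name}=={module.__version__}")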

Run

Check the argparse.ArgumentParser definitions for the available options (dataset, model, etc.).

Set args.action to select which stage to run, as listed below (a dispatch sketch follows the action list).

Label-Only MIA Based on Shadow Model (Transfer Attack)

action=1 Train_Target_Model(args)

action=2 Train_Shadow_Model(args)

action=3 AdversaryOne(args)

Label-Only MIA Based on Decision Boundary (Boundary Attack)

action=1 Train_Target_Model(args)

action=4 AdversaryTwo(args)
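
The boundary attack decides membership from how far a sample sits from the target model's decision boundary, estimated with a decision-based (label-only) adversarial attack. The sketch below uses HopSkipJump from the pinned adversarial-robustness-toolbox version to estimate that distance; it is a simplified illustration of the idea, not the repository's AdversaryTwo implementation, and the model/data arguments are placeholders.

import numpy as np
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import HopSkipJump

def boundary_distances(model: torch.nn.Module, x: np.ndarray, nb_classes: int) -> np.ndarray:
    """Estimate each sample's L2 distance to the decision boundary using only predicted labels."""
    classifier = PyTorchClassifier(
        model=model,
        loss=torch.nn.CrossEntropyLoss(),
        input_shape=x.shape[1:],
        nb_classes=nb_classes,
        clip_values=(0.0, 1.0),
    )
    attack = HopSkipJump(classifier, targeted=False, norm=2, max_iter=10, max_eval=500)
    x_adv = attack.generate(x=x)
    # Larger perturbations suggest the sample lies deeper inside its class region,
    # which is taken as evidence of membership; smaller ones suggest non-membership.
    return np.linalg.norm((x_adv - x).reshape(len(x), -1), axis=1)

Thresholding these estimated distances then yields the member/non-member decision.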

Calculate the Radius to the Decision Boundary

action=5 Decision_Radius(args)
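
Putting the stages together, a minimal dispatch over the documented action values might look like the following. The function names match the calls listed above; the import path and flag spelling are assumptions, so adapt them to the repository's actual entry script.

import argparse

# These routines are provided by the repository; the module name here is a placeholder.
from attacks import (Train_Target_Model, Train_Shadow_Model, AdversaryOne,
                     AdversaryTwo, Decision_Radius)

ACTIONS = {
    1: Train_Target_Model,   # train the target model
    2: Train_Shadow_Model,   # train the shadow model (transfer attack only)
    3: AdversaryOne,         # transfer attack
    4: AdversaryTwo,         # boundary attack
    5: Decision_Radius,      # radius to the decision boundary
}

parser = argparse.ArgumentParser()
parser.add_argument("--action", type=int, choices=sorted(ACTIONS), required=True)
# dataset, model, and the remaining options are defined in the repository's own parser
args = parser.parse_args()

ACTIONS[args.action](args)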

Citation

@inproceedings{LZ21,
author = {Zheng Li and Yang Zhang},
title = {{Membership Leakage in Label-Only Exposures}},
booktitle = {{ACM SIGSAC Conference on Computer and Communications Security (CCS)}},
publisher = {ACM},
year = {2021}
}

License

Label-Only MIA is freely available for non-commercial use and may be redistributed under these conditions. For commercial queries, please send an e-mail to zhang[AT]cispa.de, and we will send you the detailed agreement.
