# PyTorch-Adversarial-Examples

Adversarial attacks against models trained on CIFAR-10 and MNIST. The notebooks use IBM's Adversarial Robustness Toolbox (ART) to generate adversarial examples that attack PyTorch models. More attack methods and datasets may be added in the future.