Adversarial attacks against CIFAR-10 and MNIST. The notebooks use IBM's Adversarial Robustness Toolbox (ART) to generate adversarial examples against PyTorch models trained on these datasets. More attack methods and datasets may be added in the future.
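
The general workflow in ART is to wrap a trained PyTorch model in a `PyTorchClassifier` and then pass that wrapper to an attack object. Below is a minimal sketch of that pattern for MNIST using the Fast Gradient Sign Method; it is not taken from the notebooks, and the `SmallNet` architecture, the `eps=0.2` budget, and the placeholder `x_test` batch are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod


class SmallNet(nn.Module):
    """Tiny illustrative MNIST classifier (assumed, not from the repo)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, 10),
        )

    def forward(self, x):
        return self.net(x)


model = SmallNet()  # in practice, a model already trained on MNIST
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Wrap the PyTorch model so ART attacks can query its loss and gradients.
classifier = PyTorchClassifier(
    model=model,
    loss=criterion,
    optimizer=optimizer,
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Fast Gradient Sign Method with an assumed perturbation budget eps=0.2.
attack = FastGradientMethod(estimator=classifier, eps=0.2)

# x_test: NumPy array of shape (N, 1, 28, 28) with pixel values in [0, 1].
x_test = np.random.rand(16, 1, 28, 28).astype(np.float32)  # placeholder batch
x_adv = attack.generate(x=x_test)
```

Other evasion attacks in ART (e.g. PGD or Carlini-Wagner) follow the same pattern: construct the attack with the wrapped classifier and call `generate` on clean inputs.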