Assignments, projects, and a cheatsheet for the ETH Zurich Reliable and Interpretable Artificial Intelligence class (autumn 2018).
The class covered the following topics:
- Adversarial Examples
- Fast Gradient Sign Method (FGSM)
- Projected Gradient Descent (PGD)
- Training Neural Networks with Logic
- Certifying AI with Abstract Domains
- Visualization of Neural Networks
- Probabilistic Programming
Team project as part of the class. The goal of the project is to design a precise and scalable automated verifier that proves the robustness of fully connected feedforward neural networks with rectified linear unit (ReLU) activations against adversarial attacks. More information can be found here.
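Whatever abstract domain the verifier uses, the final certification step reduces to a bound comparison: if the true class's worst-case (lowest) logit exceeds every other class's best-case (highest) logit over the whole perturbation region, robustness is proven. A minimal sketch of that check, assuming output bounds `lower`/`upper` produced by some abstract analysis (names are illustrative, not the project code):

```python
import torch

def certified(lower: torch.Tensor, upper: torch.Tensor, true_label: int) -> bool:
    """Robust if the true class's lower bound beats the upper bound of
    every competing class (lower/upper are hypothetical outputs of an
    abstract-domain analysis such as intervals or zonotopes)."""
    others = upper.clone()
    others[true_label] = float('-inf')  # compare against the other classes only
    return bool(lower[true_label] > others.max())
```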
I summarized some of the most important topics in a cheatsheet.
Assignment directories contain pip requirements files to install the Python package dependencies:

```
virtualenv -p python3.6 .env
. .env/bin/activate
pip install -r requirements.txt
```
Simple FGSM and PGD attack examples on the MNIST dataset.
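For orientation, a minimal FGSM sketch in PyTorch; `model` (a differentiable classifier) and inputs scaled to [0, 1] are assumptions, not the assignment code:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step attack: move each input pixel by eps in the direction
    of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```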
Defending a neural network against PGD attacks on the MNIST dataset via adversarial training.
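A sketch of the PGD attack and the Madry-style adversarial training step built on it, i.e. training on the worst-case examples the attack finds; `model`, `eps`, and `alpha` are placeholders:

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps, alpha, steps):
    """Iterated FGSM, projected back onto the L-infinity eps-ball
    around x (and the valid pixel range) after every step."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

def adversarial_training_step(model, optimizer, x, y, eps):
    """Replace the clean batch with its PGD counterpart before the update."""
    x_adv = pgd(model, x, y, eps, alpha=eps / 4, steps=10)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```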
Training a neural network with logical constraints on the MNIST dataset.
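The idea, in the spirit of the DL2 line of work behind this topic, is to translate a logical constraint into a differentiable penalty and add it to the usual loss. A hedged sketch for the constraint "points within eps of x keep label y", with the inner maximization crudely approximated by a single random sample (all names illustrative, not the assignment code):

```python
import torch
import torch.nn.functional as F

def constraint_loss(model, x, y, eps, margin=0.0):
    """Penalty for violating the constraint at a random point near x:
    the true-class logit should beat every other logit by `margin`."""
    z = (x + eps * torch.empty_like(x).uniform_(-1, 1)).clamp(0, 1)
    logits = model(z)
    true = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    others = logits.clone()
    others.scatter_(1, y.unsqueeze(1), float('-inf'))
    return F.relu(others.max(dim=1).values - true + margin).mean()

# Combined objective: fit the data and softly satisfy the constraint,
# e.g. loss = F.cross_entropy(model(x), y) + lam * constraint_loss(...)
```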
Abstract representation of a neural network in the interval domain.
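A minimal sketch of interval propagation through one affine layer and a ReLU; the two facts that make it sound are that |W| maps radii to radii in the center/radius form, and that ReLU is monotone (layer shapes and names are assumptions):

```python
import torch

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W x + b: the center
    moves with W, the radius with |W|."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = center @ W.t() + b
    r = radius @ W.abs().t()
    return c - r, c + r

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return lo.clamp(min=0), hi.clamp(min=0)
```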
Abstract representation of a neural network in the zonotope domain.
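A zonotope keeps a center plus generator rows over noise symbols in [-1, 1]; affine layers are exact on it, and only ReLUs whose concretization crosses zero need a relaxation. A sketch loosely following the DeepZ-style transformer from the lecture, using the standard lam = u / (u - l) slope and one fresh error term per crossing neuron:

```python
import torch

def zonotope_affine(center, gens, W, b):
    """Affine layers are exact: map the center and each generator row."""
    return center @ W.t() + b, gens @ W.t()

def zonotope_relu(center, gens):
    """Sound ReLU: identity where l >= 0, zero where u <= 0, and the
    lam = u / (u - l) relaxation where [l, u] crosses zero."""
    radius = gens.abs().sum(dim=0)
    lo, hi = center - radius, center + radius
    pos = lo >= 0
    cross = (lo < 0) & (hi > 0)
    lam = torch.where(cross, hi / (hi - lo).clamp(min=1e-12), pos.float())
    mu = torch.where(cross, -lam * lo / 2, torch.zeros_like(lo))
    # One new generator per crossing neuron soaks up the relaxation error.
    new_gens = torch.cat([lam * gens, torch.diag(mu)[cross]], dim=0)
    return lam * center + mu, new_gens
```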
Probabilistic programming.
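To give a flavor of what probabilistic programming means (writing a generative model and conditioning it on observations), here is a self-contained toy in plain Python, not the assignment's model: inferring a coin's bias from observed flips by self-normalized importance sampling under a uniform prior.

```python
import random

def posterior_bias(flips, samples=100_000):
    """Draw a bias from the Uniform(0, 1) prior, weight it by the
    likelihood of the observed flips, and return the weighted mean
    (self-normalized importance sampling of the posterior mean)."""
    total_w = total_wb = 0.0
    for _ in range(samples):
        bias = random.random()
        w = 1.0
        for heads in flips:
            w *= bias if heads else 1.0 - bias
        total_w += w
        total_wb += w * bias
    return total_wb / total_w

print(posterior_bias([1, 1, 1, 0]))  # ~0.67: the posterior mean of Beta(4, 2)
```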