Merge pull request #85 from ENSTA-U2IS-AI/dev
🚀 Update to Lightning 2.0, Add Segmentation & Rework Regression
o-laurent authored Mar 28, 2024
2 parents 79ec8af + 06b37c3 commit a59ff23
Showing 243 changed files with 9,225 additions and 4,911 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/build-docs.yml
@@ -30,7 +30,7 @@ jobs:
run: |
echo "PYTHON_VERSION=$(python -c "import platform; print(platform.python_version())")"
- name: Cache folder for Torch Uncertainty
- name: Cache folder for TorchUncertainty
uses: actions/cache@v3
id: cache-folder
with:
6 changes: 3 additions & 3 deletions .github/workflows/run-tests.yml
@@ -52,7 +52,7 @@ jobs:
LICENSE
.gitignore
- name: Cache folder for Torch Uncertainty
- name: Cache folder for TorchUncertainty
if: steps.changed-files-specific.outputs.only_changed != 'true'
uses: actions/cache@v4
id: cache-folder
@@ -70,8 +70,8 @@ jobs:
- name: Check style & format
if: steps.changed-files-specific.outputs.only_changed != 'true'
run: |
python3 -m ruff check torch_uncertainty tests --no-fix
python3 -m ruff format torch_uncertainty tests --check
python3 -m ruff check torch_uncertainty --no-fix
python3 -m ruff format torch_uncertainty --check
- name: Test with pytest and compute coverage
if: steps.changed-files-specific.outputs.only_changed != 'true'
2 changes: 2 additions & 0 deletions .gitignore
@@ -8,6 +8,8 @@ docs/*/auto_tutorials/
*.pth
*.ckpt
*.out
docs/source/sg_execution_times.rst
test**/*.csv

# Byte-compiled / optimized / DLL files
__pycache__/
36 changes: 17 additions & 19 deletions README.md
@@ -1,6 +1,6 @@
<div align="center">

![Torch Uncertainty Logo](https://github.com/ENSTA-U2IS-AI/torch-uncertainty/blob/main/docs/source/_static/images/torch_uncertainty.png)
![TorchUncertaintyLogo](https://github.com/ENSTA-U2IS-AI/torch-uncertainty/blob/main/docs/source/_static/images/torch_uncertainty.png)

[![pypi](https://img.shields.io/pypi/v/torch_uncertainty.svg)](https://pypi.python.org/pypi/torch_uncertainty)
[![tests](https://github.com/ENSTA-U2IS-AI/torch-uncertainty/actions/workflows/run-tests.yml/badge.svg?branch=main&event=push)](https://github.com/ENSTA-U2IS-AI/torch-uncertainty/actions/workflows/run-tests.yml)
@@ -11,40 +11,42 @@
[![Discord Badge](https://dcbadge.vercel.app/api/server/HMCawt5MJu?compact=true&style=flat)](https://discord.gg/HMCawt5MJu)
</div>

_TorchUncertainty_ is a package designed to help you leverage uncertainty quantification techniques and make your deep neural networks more reliable. It aims to be collaborative and to include as many methods as possible, so reach out to add yours!
_TorchUncertainty_ is a package designed to help you leverage [uncertainty quantification techniques](https://github.com/ENSTA-U2IS-AI/awesome-uncertainty-deeplearning) and make your deep neural networks more reliable. It aims to be collaborative and to include as many methods as possible, so reach out to add yours!

:construction: _TorchUncertainty_ is in early development :construction: - expect changes, but reach out and contribute if you are interested in the project! **Please raise an issue if you encounter any bugs or difficulties and join the [Discord server](https://discord.gg/HMCawt5MJu).**

Our webpage and documentation are available here: [torch-uncertainty.github.io](https://torch-uncertainty.github.io).

---

This package provides a multi-level API, including:

- easy-to-use ⚡️ lightning **uncertainty-aware** training & evaluation routines for **4 tasks**: classification, probabilistic and pointwise regression, and segmentation.
- ready-to-train baselines on research datasets, such as ImageNet and CIFAR
- deep learning baselines available for training on your datasets
- [pretrained weights](https://huggingface.co/torch-uncertainty) for these baselines on ImageNet and CIFAR (work in progress 🚧).
- layers available for use in your networks
- scikit-learn style post-processing methods such as Temperature Scaling
- **layers**, **models**, **metrics**, & **losses** available for use in your networks
- scikit-learn style post-processing methods such as Temperature Scaling.
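
To give a concrete feel for how these levels compose, here is a condensed, illustrative sketch of the Bayesian-LeNet tutorial updated later in this diff; every name comes from that tutorial, but the dict passed to `optim_recipe` and the omitted argument defaults are assumptions rather than confirmed API:

```python
# Sketch of the multi-level API: datamodule + model + loss + routine + Trainer.
# Names come from the tutorial in this PR; defaults/shapes are assumptions.
from lightning.pytorch import Trainer
from torch import nn, optim

from torch_uncertainty.datamodules import MNISTDataModule
from torch_uncertainty.losses import ELBOLoss
from torch_uncertainty.models.lenet import bayesian_lenet
from torch_uncertainty.routines import ClassificationRoutine

datamodule = MNISTDataModule(root="data", batch_size=128, eval_ood=False)
model = bayesian_lenet(datamodule.num_channels, datamodule.num_classes)
loss = ELBOLoss(
    model=model,
    inner_loss=nn.CrossEntropyLoss(),  # likelihood term
    kl_weight=1 / 50000,               # weight of the KL regularizer
    num_samples=3,                     # Monte-Carlo samples per step
)
routine = ClassificationRoutine(
    model=model,
    num_classes=datamodule.num_classes,
    loss=loss,
    optim_recipe={"optimizer": optim.Adam(model.parameters(), lr=1e-3)},
)
Trainer(max_epochs=1).fit(routine, datamodule=datamodule)
```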

See the [Reference page](https://torch-uncertainty.github.io/references.html) or the [API reference](https://torch-uncertainty.github.io/api.html) for a more exhaustive list of the implemented methods, datasets, metrics, etc.
Have a look at the [Reference page](https://torch-uncertainty.github.io/references.html) or the [API reference](https://torch-uncertainty.github.io/api.html) for a more exhaustive list of the implemented methods, datasets, metrics, etc.

## Installation
## ⚙️ Installation

Install the desired PyTorch version in your environment.
TorchUncertainty requires Python 3.10 or greater. Install the desired PyTorch version in your environment.
Then, install the package from PyPI:

```sh
pip install torch-uncertainty
```

If you aim to contribute, have a look at the [contribution page](https://torch-uncertainty.github.io/contributing.html).
The installation procedure for contributors is different: have a look at the [contribution page](https://torch-uncertainty.github.io/contributing.html).

## Getting Started and Documentation
## :racehorse: Quickstart

Please find the documentation at [torch-uncertainty.github.io](https://torch-uncertainty.github.io).
We make a quickstart available at [torch-uncertainty.github.io/quickstart](https://torch-uncertainty.github.io/quickstart.html).

A quickstart is available at [torch-uncertainty.github.io/quickstart](https://torch-uncertainty.github.io/quickstart.html).
## :books: Implemented methods

## Implemented methods
TorchUncertainty currently supports **Classification**, **probabilistic** and pointwise **Regression**, and **Segmentation**.

### Baselines

@@ -55,7 +57,7 @@ To date, the following deep learning baselines have been implemented:
- BatchEnsemble
- Masksembles
- MIMO
- Packed-Ensembles (see [blog post](https://medium.com/@adrien.lafage/make-your-neural-networks-more-reliable-with-packed-ensembles-7ad0b737a873)) - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_pe_cifar10.html)
- Packed-Ensembles (see [Blog post](https://medium.com/@adrien.lafage/make-your-neural-networks-more-reliable-with-packed-ensembles-7ad0b737a873)) - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_pe_cifar10.html)
- Bayesian Neural Networks :construction: Work in progress :construction: - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_bayesian.html)
- Regression with Beta Gaussian NLL Loss
- Deep Evidential Classification & Regression - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_evidential_classification.html)
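
Since Packed-Ensembles is the package's flagship baseline, a rough layer-level sketch may help; `PackedLinear` and `PackedConv2d` live in `torch_uncertainty.layers`, but the constructor arguments shown here are assumptions drawn from the paper rather than from this diff:

```python
# Hypothetical sketch: swapping standard layers for packed counterparts to
# train num_estimators subnetworks in a single forward pass. Argument names
# (alpha, num_estimators, first/last) are assumptions -- check the API docs.
from torch import nn

from torch_uncertainty.layers import PackedConv2d, PackedLinear

net = nn.Sequential(
    PackedConv2d(3, 32, 3, alpha=2, num_estimators=4, first=True),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    PackedLinear(32, 10, alpha=2, num_estimators=4, last=True),
)
```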
@@ -75,7 +77,7 @@ To date, the following post-processing methods have been implemented:
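
For the scikit-learn-style post-processing mentioned above, a hedged sketch of the intended workflow; `TemperatureScaler` exists in `torch_uncertainty.post_processing`, but the exact constructor and `fit` signatures below are assumptions, so consult the API reference:

```python
# Hypothetical calibration workflow -- signatures are assumptions.
# Given: a trained `model`, a calibration dataset `calib_set`, a batch `inputs`.
from torch_uncertainty.post_processing import TemperatureScaler

scaler = TemperatureScaler(model=model)  # wrap a trained classifier
scaler.fit(calibration_set=calib_set)    # fit the temperature on held-out data
calibrated_logits = scaler(inputs)       # temperature-scaled logits
```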

## Tutorials

We provide the following tutorials in our documentation:
Our documentation contains the following tutorials:

- [From a Standard Classifier to a Packed-Ensemble](https://torch-uncertainty.github.io/auto_tutorials/tutorial_pe_cifar10.html)
- [Training a Bayesian Neural Network in 3 minutes](https://torch-uncertainty.github.io/auto_tutorials/tutorial_bayesian.html)
@@ -84,10 +86,6 @@ We provide the following tutorials in our documentation:
- [Training a LeNet with Monte-Carlo Dropout](https://torch-uncertainty.github.io/auto_tutorials/tutorial_mc_dropout.html)
- [Training a LeNet with Deep Evidential Classification](https://torch-uncertainty.github.io/auto_tutorials/tutorial_evidential_classification.html)

## Awesome Uncertainty repositories

You may find a lot of papers about modern uncertainty estimation techniques on the [Awesome Uncertainty in Deep Learning](https://github.com/ENSTA-U2IS-AI/awesome-uncertainty-deeplearning).

## Other References

This package also contains the official implementation of Packed-Ensembles.
107 changes: 48 additions & 59 deletions auto_tutorials_source/tutorial_bayesian.py
@@ -2,58 +2,56 @@
Train a Bayesian Neural Network in Three Minutes
================================================
In this tutorial, we will train a Bayesian Neural Network (BNN) LeNet classifier on the MNIST dataset.
In this tutorial, we will train a variational inference Bayesian Neural Network (BNN) LeNet classifier on the MNIST dataset.
Foreword on Bayesian Neural Networks
------------------------------------
Bayesian Neural Networks (BNNs) are a class of neural networks that can estimate the uncertainty of their predictions via uncertainty on their weights. This is achieved by considering the weights of the neural network as random variables, and by learning their posterior distribution. This is in contrast to standard neural networks, which only learn a single set of weights, which can be seen as Dirac distributions on the weights.
Bayesian Neural Networks (BNNs) are a class of neural networks that estimate the uncertainty on their predictions via uncertainty
on their weights. This is achieved by considering the weights of the neural network as random variables, and by learning their
posterior distribution. This is in contrast to standard neural networks, which only learn a single set of weights, which can be
seen as Dirac distributions on the weights.
For more information on Bayesian Neural Networks, we refer the reader to the following resources:
- Weight Uncertainty in Neural Networks `ICML2015 <https://arxiv.org/pdf/1505.05424.pdf>`_
- Hands-on Bayesian Neural Networks - a Tutorial for Deep Learning Users `IEEE Computational Intelligence Magazine <https://arxiv.org/pdf/2007.06823.pdf>`_
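
As background, variational BNNs of this kind are typically trained by maximizing the evidence lower bound (ELBO), which trades data fit against divergence of the weight posterior from the prior:

```latex
\mathcal{L}(\phi) =
  \mathbb{E}_{q_\phi(w)}\bigl[\log p(\mathcal{D} \mid w)\bigr]
  - \beta \,\mathrm{KL}\bigl(q_\phi(w) \,\|\, p(w)\bigr)
```

The `kl_weight` argument of the `ELBOLoss` used below plays the role of β; the tutorial sets it to 1/50000, i.e., roughly one over the number of training samples.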
Training a Bayesian LeNet using TorchUncertainty models and PyTorch Lightning
-----------------------------------------------------------------------------
Training a Bayesian LeNet using TorchUncertainty models and Lightning
---------------------------------------------------------------------
In this part, we train a Bayesian LeNet, based on the model and routines already implemented in TorchUncertainty.
1. Loading the utilities
~~~~~~~~~~~~~~~~~~~~~~~~
To train a BNN using TorchUncertainty, we have to load the following utilities from TorchUncertainty:
To train a BNN using TorchUncertainty, we have to load the following modules:
- the cli handler: cli_main and argument parser: init_args
- the model: bayesian_lenet, which lies in the torch_uncertainty.model module
- the classification training routine in the torch_uncertainty.training.classification module
- the Trainer from Lightning
- the model: bayesian_lenet, which lies in the torch_uncertainty.model
- the classification training routine from torch_uncertainty.routines
- the Bayesian objective: the ELBOLoss, which lies in the torch_uncertainty.losses module
- the datamodule that handles dataloaders: MNISTDataModule, which lies in the torch_uncertainty.datamodule
"""
- the datamodule that handles dataloaders: MNISTDataModule from torch_uncertainty.datamodules
from torch_uncertainty import cli_main, init_args
from torch_uncertainty.datamodules import MNISTDataModule
from torch_uncertainty.losses import ELBOLoss
from torch_uncertainty.models.lenet import bayesian_lenet
from torch_uncertainty.routines.classification import ClassificationSingle
We will also need to define an optimizer using torch.optim, the
neural network utils from torch.nn, as well as the partial util to provide
the modified default arguments for the ELBO loss.
"""

# %%
# We will also need to define an optimizer using torch.optim as well as the
# neural network utils within torch.nn, as well as the partial util to provide
# the modified default arguments for the ELBO loss.
#
# We also import sys to override the command line arguments.

import os
from functools import partial
from pathlib import Path
import sys

from lightning.pytorch import Trainer
from torch import nn, optim

from torch_uncertainty.datamodules import MNISTDataModule
from torch_uncertainty.losses import ELBOLoss
from torch_uncertainty.models.lenet import bayesian_lenet
from torch_uncertainty.routines import ClassificationRoutine

# %%
# 2. Creating the Optimizer Wrapper
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 2. The Optimization Recipe
# ~~~~~~~~~~~~~~~~~~~~~~~~~~
# We will use the Adam optimizer with the default learning rate of 0.001.
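
The body of `optim_lenet` is collapsed just below; given the stated Adam optimizer at the default learning rate of 0.001 and the `dict` return annotation visible in the hunk header, a plausible reconstruction (an assumption, not the verbatim source) is:

```python
# Plausible reconstruction of the collapsed optim_lenet; only Adam at
# lr=0.001 is confirmed by the text -- the dict shape is an assumption.
from torch import nn, optim

def optim_lenet(model: nn.Module) -> dict:
    return {"optimizer": optim.Adam(model.parameters(), lr=1e-3)}
```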


@@ -69,26 +67,19 @@ def optim_lenet(model: nn.Module) -> dict:
# 3. Creating the necessary variables
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# In the following, we will need to define the root of the datasets and the
# logs, and to fake-parse the arguments needed for using the PyTorch Lightning
# Trainer. We also create the datamodule that handles the MNIST dataset,
# dataloaders and transforms. Finally, we create the model using the
# blueprint from torch_uncertainty.models.

root = Path(os.path.abspath(""))
# In the following, we define the Lightning trainer, the root of the datasets and the logs.
# We also create the datamodule that handles the MNIST dataset, dataloaders and transforms.
# Please note that the datamodules can also handle OOD detection by setting the eval_ood
# parameter to True. Finally, we create the model using the blueprint from torch_uncertainty.models.

# We mock the arguments for the trainer
sys.argv = ["file.py", "--max_epochs", "1", "--enable_progress_bar", "False"]
args = init_args(datamodule=MNISTDataModule)

net_name = "logs/bayesian-lenet-mnist"
trainer = Trainer(accelerator="cpu", enable_progress_bar=False, max_epochs=1)

# datamodule
args.root = str(root / "data")
dm = MNISTDataModule(**vars(args))
root = Path("") / "data"
datamodule = MNISTDataModule(root=root, batch_size=128, eval_ood=False)

# model
model = bayesian_lenet(dm.num_channels, dm.num_classes)
model = bayesian_lenet(datamodule.num_channels, datamodule.num_classes)

# %%
# 4. The Loss and the Training Routine
@@ -99,39 +90,36 @@ def optim_lenet(model: nn.Module) -> dict:
# library. As we are training a classification model, we use the CrossEntropyLoss
# as the likelihood.
# We then define the training routine using the classification training routine
# from torch_uncertainty.training.classification. We provide the model, the ELBO
# loss and the optimizer, as well as all the default arguments.
# from torch_uncertainty.routines. We provide the model, the ELBO
# loss and the optimizer to the routine.

loss = partial(
ELBOLoss,
loss = ELBOLoss(
model=model,
criterion=nn.CrossEntropyLoss(),
inner_loss=nn.CrossEntropyLoss(),
kl_weight=1 / 50000,
num_samples=3,
)

baseline = ClassificationSingle(
routine = ClassificationRoutine(
model=model,
num_classes=dm.num_classes,
in_channels=dm.num_channels,
num_classes=datamodule.num_classes,
loss=loss,
optimization_procedure=optim_lenet,
**vars(args),
optim_recipe=optim_lenet(model),
)

# %%
# 5. Gathering Everything and Training the Model
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Now that we have prepared all of this, we just have to gather everything in
# the main function and to train the model using the PyTorch Lightning Trainer.
# Specifically, it needs the baseline, that includes the model as well as the
# training routine, the datamodule, the root for the datasets and the logs, the
# name of the model for the logs and all the training arguments.
# the main function and to train the model using the Lightning Trainer.
# Specifically, it needs the routine, which includes the model as well as the
# training/eval logic, and the datamodule.
# The dataset will be downloaded automatically in the root/data folder, and the
# logs will be saved in the root/logs folder.

results = cli_main(baseline, dm, root, net_name, args)
trainer.fit(model=routine, datamodule=datamodule)
trainer.test(model=routine, datamodule=datamodule)

# %%
# 6. Testing the Model
@@ -140,19 +128,20 @@ def optim_lenet(model: nn.Module) -> dict:
# Now that the model is trained, let's test it on MNIST.

import matplotlib.pyplot as plt
import numpy as np
import torch
import torchvision

import numpy as np


def imshow(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.axis("off")
plt.tight_layout()
plt.show()


dataiter = iter(dm.val_dataloader())
dataiter = iter(datamodule.val_dataloader())
images, labels = next(dataiter)

# print images
2 changes: 1 addition & 1 deletion auto_tutorials_source/tutorial_corruptions.py
@@ -105,7 +105,7 @@ def show_images(transform):

#%%
# 10. Frost
# ~~~~~~~~
# ~~~~~~~~~
from torch_uncertainty.transforms.corruptions import Frost

show_images(Frost)