This project aims to use PyTorch Lightning to implement state-of-the-art algorithms in semi-supervised learning (SSL).
Semi-supervised learning leverages abundant unlabeled samples to improve models when labeled data are scarce. Several assumptions are commonly made in semi-supervised learning:
- Smoothness assumption
- Low-density assumption
- Manifold assumption
Most approaches apply regularization to the model so that these assumptions are satisfied. In this repository, we first focus on methods using consistency loss.
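As a generic illustration (not this repo's actual implementation), consistency regularization penalizes the divergence between a model's predictions on an unlabeled sample and on a perturbed copy of it. A minimal sketch, assuming simple Gaussian input noise as the perturbation:

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, noise_std=0.1):
    """MSE between predictions on an unlabeled input and a noisy copy.

    Generic sketch of consistency regularization; real methods (e.g.
    Pi-model, Mean Teacher) differ in how the two views are produced.
    """
    with torch.no_grad():
        # Target predictions are treated as fixed (no gradient).
        target = torch.softmax(model(x_unlabeled), dim=1)
    perturbed = x_unlabeled + noise_std * torch.randn_like(x_unlabeled)
    pred = torch.softmax(model(perturbed), dim=1)
    return F.mse_loss(pred, target)
```

This unsupervised term is typically added to the usual cross-entropy loss on the labeled batch, weighted by a ramp-up coefficient.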
PyTorch Lightning is a PyTorch wrapper that standardizes the training and testing process of AI projects. Projects using PyTorch Lightning can focus on implementing the algorithm, without worrying about complicated engineering parts such as multi-GPU training, 16-bit precision, TensorBoard logging, and TPU training.
In this project, we use PyTorch Lightning as the coding backbone and implement algorithms with minimal changes. The necessary implementation of a new algorithm is placed in `module`.
```
pip install -r requirements.txt
```
- `configs`: Contains the config files for each approach.
- `models`: Contains all the models.
- `dataloader`: Data loaders for every dataset.
- `module`: SSL modules inheriting from `pytorch_lightning.LightningModule`.
To implement a new method, one usually needs to define a new config, data loader, and PL module.
- Please refer to `argparser.py` for hyperparameters. `read_from_tb.py` is used to extract the final accuracies from `tensorboard` logs.
```
python main.py -c configs/config_mixup.ini -g [GPU ID] --affix [FILE NAME]
python main.py -c configs/config_mixmatch.ini -g [GPU ID] --num_labeled [NUMBER OF LABELED DATA] --affix [FILE NAME]
```
The result is the average of three runs (seed=1, 2, 3).
| | Acc |
|---|---|
| full train with mixup | 4.41±0.03 |
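For reference, mixup trains on convex combinations of pairs of samples and their labels, with the mixing coefficient drawn from a Beta(alpha, alpha) distribution. A minimal sketch (the function name and batch shapes are illustrative):

```python
import numpy as np
import torch

def mixup_batch(x, y_onehot, alpha=1.0):
    """Mix a batch with a shuffled copy of itself (mixup, Zhang et al. 2018).

    x: inputs of shape [B, ...]; y_onehot: one-hot labels of shape [B, C].
    Returns the mixed inputs and the correspondingly mixed soft labels.
    """
    lam = np.random.beta(alpha, alpha)
    index = torch.randperm(x.size(0))
    mixed_x = lam * x + (1 - lam) * x[index]
    mixed_y = lam * y_onehot + (1 - lam) * y_onehot[index]
    return mixed_x, mixed_y
```

The model is then trained with cross-entropy against the mixed soft labels instead of the original hard labels.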
In the paper, the experiments were run five times (seed=1, 2, 3, 4, 5); in this implementation, only three times (seed=1, 2, 3).
| | time (hr:min) | 250 | 500 | 1000 | 2000 | 4000 |
|---|---|---|---|---|---|---|
| Paper | - | 11.08±0.87 | 9.65±0.94 | 7.75±0.32 | 7.03±0.15 | 6.24±0.06 |
| Reproduce | 17:24 | 10.93±1.20 | 9.72±0.63 | 8.02±0.74 | - | - |
| This repo | 17:40 | 11.10±1.00 | 10.05±0.45 | 8.00±0.42 | 7.13±0.13 | 6.22±0.08 |
- ReMixMatch
- FixMatch
- GAN-based methods (DSGAN, BadGAN)
- Other approaches using consistency loss (VAT, Mean Teacher)
- Polish the code for `CustomSemiDataset` in `data_loader/base_data.py`