This repository explores the application of compositional game theory, as introduced in the paper *Compositional Game Theory* [1], to the analysis and enhancement of neural networks. We represent neural network components as players in open games, aiming to leverage game-theoretic tools for improved training and understanding.
The repository includes:

- `cgtnnlib`, a library for performing the research
- `data`, a directory with some of the data we used
- `doc`, a directory with documentation
- notebooks for running experiments
As of now, the `main` branch is in flux; stable releases are available at https://disk.yandex.ru/d/aZozDpBlzh_z1A
To get started:

- Create a virtual environment
- Install dependencies: `pip install -r requirements.txt` (a quick sanity check is sketched below)
- Open a notebook (any `*.ipynb` file) with an `.ipynb` reader available to you and run it
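After installing the dependencies, a quick import check can confirm the environment is usable. The snippet below assumes it is run from the repository root and that the package imports as `cgtnnlib`; adjust the import if your layout differs.

```python
# Quick sanity check after installing dependencies (run from the repository root).
# Assumes the package imports as `cgtnnlib`; adjust the import if the layout differs.
import torch
import cgtnnlib

print("PyTorch version:", torch.__version__)
print("cgtnnlib loaded from:", cgtnnlib.__file__)
```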
The library consists of classes (with filenames beginning with a capital letter) that represent the problem domain (`Dataset`, `Report`, etc.) and several procedural modules:

- `common.py`: main functions and evaluation
- `analyze.py`: reads report JSON files and plots graphs
- `datasets.py`: dataset definitions
- `plt_extras.py`: matplotlib extensions
- `torch_device.py`: abstracts away PyTorch device selection (see the sketch after this list)
- `training.py`: training procedures
- etc.
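The exact contents of `torch_device.py` are internal to the library and may differ; the following is only an illustrative sketch of the pattern such a module typically abstracts, not the real code.

```python
# Illustrative device-selection helper in the spirit of torch_device.py.
# This is NOT the library's actual code; it only shows the pattern being abstracted.
import torch

def select_device() -> torch.device:
    """Prefer CUDA when available, otherwise fall back to the CPU."""
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = select_device()
print(torch.zeros(2, 2, device=device).device)  # confirms tensors land on the chosen device
```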
The `nn` subdirectory contains PyTorch modules and functions that represent the neural architectures we evaluate.
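The architectures themselves are specific to the experiments and live in that subdirectory; the block below is only a generic illustration of the kind of PyTorch module such a directory holds, not one of the evaluated networks.

```python
# Generic PyTorch module, shown only as an illustration of the format used in nn/.
# It is NOT one of the architectures evaluated in this repository.
import torch
from torch import nn

class TinyMLP(nn.Module):
    """A minimal multilayer perceptron used purely as an example."""

    def __init__(self, in_features: int, hidden: int, out_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_features),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyMLP(4, 16, 2)
print(model(torch.randn(8, 4)).shape)  # torch.Size([8, 2])
```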
The `doc` subdirectory contains info about the datasets.
Trained models are stored in the `pth/` directory. Along with each model, a corresponding JSON file is also created which contains properties like:

- `started`: date of report creation
- `saved`: date of last update
- `model`: model parameters, such as the class name and hyperparameter values
- `dataset`: dataset info, including the type of learning task (regression/classification)
- `loss`: an array of loss values from each iteration of training, for analyzing loss curves
- `eval`: an object keyed by `noise_factor` (the amount of noise mixed into the input during evaluation), mapping each value to its evaluation metrics: `r2` and `mse` for regression; `f1`, `accuracy`, and `roc_auc` for classification
- other, experiment-specific keys
Typically, a report is created during model creation and initial training, and then updated during evaluation. This two-step process produces the complete report to be analyzed by `analyze.py`.
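As an illustration of what that analysis involves, the sketch below loads one such report and plots its loss curve. The file name and the exact nesting of the `eval` object are assumptions based on the key descriptions above; `analyze.py` is the library's actual analysis entry point.

```python
# Sketch: read a report JSON (stored alongside each model in pth/) and inspect it.
# The file name and the exact shape of "eval" are assumptions; see analyze.py for the real code.
import json
import matplotlib.pyplot as plt

with open("pth/example_report.json") as f:  # hypothetical report file
    report = json.load(f)

# Loss values recorded at each training iteration.
plt.plot(report["loss"])
plt.xlabel("iteration")
plt.ylabel("loss")
plt.show()

# Evaluation metrics per noise factor mixed into the input during evaluation,
# e.g. r2/mse for regression or f1/accuracy/roc_auc for classification.
for noise_factor, metrics in report["eval"].items():
    print(noise_factor, metrics)
```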
- [1] N. Ghani, J. Hedges, V. Winschel, and P. Zahn. Compositional game theory. March 2016. https://arxiv.org/abs/1603.04641