This repository contains code for intermodality registration of paired images (e.g., from the same subject). The method is synthesis-based and uses two different losses: (i) a registration loss for image translation, computed at the image level, that capitalises on a pre-trained intramodality registration network; and (ii) a structure-preserving constraint based on contrastive learning. We apply this method to the registration of histological sections to MRI slices, a key step in 3D histology reconstruction.
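For intuition only, the sketch below shows a patch-wise InfoNCE term of the kind typically used as such a contrastive structure-preserving constraint; the function name, feature shapes and temperature value are illustrative assumptions, not the exact implementation found in `src`.

```python
# Minimal sketch of a patch-wise InfoNCE (contrastive) loss.
# Feature shapes, the temperature value and names are assumptions for
# illustration; see src for the actual implementation.
import torch
import torch.nn.functional as F


def patch_infonce(feats_src, feats_gen, tau=0.07):
    """feats_src, feats_gen: (N, C) features sampled at N matching patch
    locations of the source image and its translation."""
    feats_src = F.normalize(feats_src, dim=1)
    feats_gen = F.normalize(feats_gen, dim=1)
    # Similarity of every translated-image patch against every source patch.
    logits = feats_gen @ feats_src.t() / tau          # (N, N)
    # The positive for patch i is the source patch at the same location;
    # all other sampled locations act as negatives.
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)
```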
- **data**: necessary data for the Allen and BigBrain datasets reported in the paper [1].
- **database**: I/O code for loading the datasets.
- **scripts**: scripts to train the networks; each folder contains a dedicated configuration file.
- **src**: source code containing layers, models, losses and data loaders.
**Python**
The code runs on Python 3.8.5 and several external libraries listed in `requirements.txt`.
**Set-up configuration files**
- setup.py: specifies the data and results directories. Currently pointing to ./data and ./results.
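For illustration, a path configuration of this kind might look like the sketch below; the variable names (REPO_DIR, DATA_DIR, RESULTS_DIR) are assumptions and may differ from the actual setup.py.

```python
# Hypothetical sketch of a setup.py-style path configuration; the variable
# names below are assumptions and may differ from the repository's file.
from os.path import abspath, dirname, join

REPO_DIR = dirname(abspath(__file__))
DATA_DIR = join(REPO_DIR, 'data')        # Allen / BigBrain data
RESULTS_DIR = join(REPO_DIR, 'results')  # trained models and outputs
```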
**(Optional) Train intramodality registration networks**
- scripts/Registration/*/train.py: train intramodality registration networks with the desired parameters, set in the configFile from the same directory or from the command line. Pre-trained registration networks are available in the results folder for both the Allen and BigBrain datasets; a sketch of how such a network can be reused as a frozen loss is shown below.
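As indicated above, the sketch below illustrates how a pre-trained intramodality registration network could be loaded, frozen and used to provide the registration loss; the checkpoint format, the (warped, flow) output convention and the squared-error cost are illustrative assumptions, not the repository's actual API.

```python
# Hypothetical sketch: reuse a pre-trained intramodality registration network
# as a frozen loss during intermodality training. The checkpoint format, the
# (warped, flow) output convention and the squared-error cost are assumptions.
import torch
import torch.nn as nn


def freeze_registration_net(reg_net: nn.Module, checkpoint_path: str) -> nn.Module:
    """Load pre-trained weights and freeze the registration network."""
    reg_net.load_state_dict(torch.load(checkpoint_path, map_location='cpu'))
    reg_net.eval()
    for p in reg_net.parameters():
        p.requires_grad_(False)
    return reg_net


def registration_loss(reg_net: nn.Module, translated_histo, mri_slice):
    """Register the translated (MRI-like) histology to the MRI slice with the
    frozen network; gradients reach the translator only through translated_histo."""
    warped, _flow = reg_net(translated_histo, mri_slice)
    return torch.mean((warped - mri_slice) ** 2)
```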
**Train intermodality registration networks**
When using intramodality registration networks other than the ones provided, specify the new path in the corresponding configuration files. Parameters are set either in each script's configFile or from the command line (a sketch of this pattern follows the list below).
- scripts/InfoNCE/*/train_noGAN.py: train the SbR method with parameters specified either in the configFile or from the command line. When specifying --l_nce 0, the SbR-N variant is used.
- scripts/InfoNCE/*/train.py: train the SbR-G extension method with parameters specified either in the configFile or from the command line.
- scripts/CycleGAN/*/train.py: train the CycleGAN baseline method, using the approach in [2] together with our registration loss.
- scripts/RoT/*/train.py: train the RoT baseline method [3].
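The configFile/command-line pattern mentioned above could look roughly like the following sketch; apart from --l_nce, every option name and default value is an illustrative assumption rather than the scripts' actual interface.

```python
# Hypothetical sketch of the configFile + command-line override pattern used
# by the training scripts. Only --l_nce appears in the text above; every other
# name and default value here is an illustrative assumption.
import argparse

# Defaults that would normally come from the script's configFile.
config = {'l_nce': 1.0, 'l_reg': 1.0, 'epochs': 100}

parser = argparse.ArgumentParser(description='Train an intermodality (SbR) network.')
parser.add_argument('--l_nce', type=float, default=config['l_nce'],
                    help='weight of the contrastive (NCE) term; 0 gives the SbR-N variant')
parser.add_argument('--l_reg', type=float, default=config['l_reg'],
                    help='weight of the registration loss (hypothetical flag)')
parser.add_argument('--epochs', type=int, default=config['epochs'])
args = parser.parse_args()

config.update(vars(args))   # command-line values override the configFile defaults
print(config)
```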
02 August 2021:
- Initial commit
[1] https://arxiv.org/abs/2107.14449
[2] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks." In IEEE International Conference on Computer Vision (ICCV), 2017.
[3] Moab Arar, Yiftach Ginger, Dov Danon, Amit H. Bermano, and Daniel Cohen-Or. "Unsupervised Multi-Modal Image Registration via Geometry Preserving Image-to-Image Translation." In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 13410-13419.